0.9.2
-- Yet another bug with dml/logdel replication: updates may not be applied to the destination. This bug was introduced in the new trigger created in 0.9.1. If you've not created any new replication jobs with 0.9.1, all dml/logdel replication jobs with a single-column primary/unique key are fine. If you have created any, a new trigger will have to be made on the source table, so those replication jobs should be recreated by running the destroyer then the maker functions, as sketched after the 0.9.0 notes below. Note that only the table owner can drop a trigger.
-- Please note that if you use composite primary/unique keys (more than one column), you will still need to recreate your dml & logdel replication jobs to get a new trigger installed on the source if you are coming from any version older than this one. (Back up your destination logdel tables first to preserve the deleted rows.) Single-column primary/unique keys only have an issue with triggers created with 0.9.1.
-- Made the source of the trigger functions more human-readable.

0.9.1
-- Fixed bug introduced in 0.9.0 that would prevent composite primary/unique key replication from working in dml & logdel replication. The underlying change was made in 0.8.3 to try to handle the case where one column of the composite key changes. Data isn't missed on the destination anymore as it could be prior to 0.8.3; instead, replication can completely fail due to unique key violations (errors will show up in the pg_jobmon logs or when you attempt to run the replication manually). To fix this bug, you will have to recreate any dml replication jobs that use a composite primary/unique key so the trigger function is updated. Jobs that have a single-column key work fine.
-- IMPORTANT: Back up your destination table for logdel replication in order to preserve your deleted rows. These can then be inserted back in after things are fixed.
-- If a table's refresh job tries to run concurrently, it will set the main entry in pg_jobmon's job_log table to level 2 (WARNING). Three consecutive concurrent run attempts will cause pg_jobmon to raise a warning alert about possible problems.

0.9.0
-- IMPORTANT NOTE: This update requires the new 1.0.0 version of pg_jobmon, released shortly before this update. Please update pg_jobmon before updating mimeo!
-- Removed explicit calls to the pg_jobmon schema. They would cause failures if pg_jobmon was installed in any schema other than "jobmon".
-- Changed refresh functions that call dblink multiple times to use a single, named connection instead of a new unnamed one for every call. Also ensured named connections are unique to each table's refresh job to prevent conflicts.
-- Fixed the repull option in dml replication to clear the queue table on the source database.
-- Reaching the batch limit for any refresh function will now cause a level 2 (warning) alert in pg_jobmon. This helps keep replication from falling behind by warning that the rate of change on the source may be higher than the destination can handle.
-- Changed all references to the "pk_field" variable and column name to "pk_name". Using both names for the same thing throughout development was bad coding practice on my part. The changes visible to users are the optional argument name in updater/dml/logdel_maker & the column name in the refresh_config_updater/dml/logdel tables.
-- Added pgTAP tests for the data repull options.
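For the trigger fixes above, recreating a dml/logdel job means running the destroyer and then the maker function. A minimal sketch, assuming the extension lives in a schema named "mimeo" and abbreviating the argument lists (the exact signatures aren't shown in this changelog):

    -- hypothetical example; back up the destination first (especially logdel tables, as noted above)
    SELECT mimeo.dml_destroyer('public.my_table');  -- only the source table's owner can drop the old trigger
    SELECT mimeo.dml_maker('public.my_table', 1);   -- 1 = data_source_id from the dblink_mapping table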
0.8.4
-- No changes to mimeo core code.
-- Fixed Makefile to use egrep instead of trying to allow a GREP environment variable. The latter option wasn't working as expected in non-GNU environments (tested on OmniOS - http://omnios.omniti.com/).
-- Fixed pgTAP destroyer tests so they don't destroy any non-test replication jobs or tables.

0.8.3
-- Fixed dml refresh not propagating updates and deletes. This bug was introduced in v0.7.0 when trying to simplify the refresh process. You may have to repull data for any dml jobs that have run with that version or later to bring the destination back into sync with the source.
-- Fixed dml/logdel refresh not updating a row if it has a multi-column primary/unique key and only a subset of that key's columns changed. This was not a new bug and has been an issue from the beginning. You may have to repull data for any dml/logdel jobs that have run to bring the destination back into sync with the source. Be aware that a full refresh of a logdel table will remove the deleted rows that were logged to the destination, so back those tables up before a full refresh.
-- Fixed edge case in refresh_dml/logdel where, if the batch limit was hit, the remote queue table might not mark the processed rows properly.
-- Changed tests to use pgTAP. The testing suite is now much more extensive and helped find the above bugs.

0.8.2
-- Moved the index creation step to after data insertion. This applies to all maker functions as well.

0.8.1
-- IMPORTANT NOTE: The automatic indexes that were being created in versions <= 0.8.0 may not have kept the columns in the correct order for multi-column indexes. Please double-check any primary keys, unique indexes, or indexes created on the destination with versions <= 0.8.0.
-- Fixed the above issue so that any indexes propagated from the source are created properly on the destination.

0.8.0
-- IMPORTANT NOTE: Signatures on the maker functions & refresh_snap() have changed, so they were dropped and recreated. Check permissions if needed before and after the update.
-- Automatic creation of indexes with the maker functions. Future index changes are not automatically propagated by refresh runs. This allows source and destination to differ (e.g. data warehouse destinations often omit indexes to save space; it also prevents issues with partitioned destination tables).
-- Ensure primary or unique indexes are always made on destination tables when using dml/logdel_maker() (update_maker() was already doing this properly). This happens even when the new p_index option is set to false.
-- Changed function parameter 'p_pk_field' to 'p_pk_name' to be more consistent with other internal variable names.
-- update_maker() now checks that, if the column filter option is used, all columns that are part of the primary/unique key are included.
-- Fixed dml/logdel_destroyer() functions to actually remove the objects on the remote database.
-- Fixed manually setting the primary/unique key types with the maker function parameter p_pk_type (sketched below).
-- Updated Makefile to allow setting the grep binary if needed during building.
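A sketch of setting the key manually with the renamed 0.8.0 parameters (the "mimeo" schema, the argument layout, and the use of arrays for the key columns/types are assumptions; only the p_pk_name and p_pk_type names come from the notes above):

    -- hypothetical example: composite key given manually instead of relying on auto-detection
    SELECT mimeo.dml_maker('public.orders', 1,
        p_pk_name := ARRAY['order_id', 'line_no'],
        p_pk_type := ARRAY['int', 'int']);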
0.7.2
-- IMPORTANT: To keep this update from interfering with current jobs, unschedule any running jobs before applying it. During this update, all jobs that try to run will be held until it is complete.
-- Created new parent table column 'last_run' (timestamptz) that is used by run_refresh() to track when the job last ran.
-- Changed last_value to be a column only on the inserter & updater child config tables.
-- Set run_refresh() to default to a batch number of 4 if no argument is given.
-- Created howto.md file in the /doc folder to give more detailed setup and usage instructions for first-time users.

0.7.1
-- Fixed issue where columns with a fixed length would not migrate over properly (e.g. bit(7) would come over as bit(1)).

0.7.0
-- IMPORTANT NOTE: ALL maker functions were dropped and replaced with a new signature in this update. Please check permissions and function calls before and after the update.
-- Added support for an array list filter that can be used to designate only the specific columns that should be used for replication.
-- The source table trigger for dml/logdel types will only fire on UPDATEs of the given columns (uses UPDATE OF col1 [, col2...]).
-- Added support for a conditional WHERE statement when pulling data.
-- All conditional statements MUST either start with the 'WHERE' keyword or with a comma-separated list of tables that will be used in the conditionals (the list must begin with a comma before the first table in this case).
-- Examples (see also the maker sketch after the 0.6.0 notes below):
       (..., p_condition := 'WHERE col1 > 4 AND col2 < ''test''')
       (..., p_condition := ', table2, table3 WHERE table1.col1 = table2.col1 AND table1.col3 = table3.col3')
-- JOINs are NOT guaranteed to work in all cases at this time (mostly for incremental; they may work in snap & dml).
-- For logdel, the condition is NOT applied to rows that are deleted from the source table. This ensures all deleted rows on the source are logged for warehousing.
-- Fixed inserter & updater refresh to allow either the p_repull_start or p_repull_end argument without requiring both. This allows repulling everything greater than p_repull_start or everything less than p_repull_end.
-- Refresh functions will now handle job logging properly and give a clearer error message if the run fails before the job_id is actually created.
-- Updated docs.

0.6.1
-- Fixed dml & logdel queue objects on the source to include the schema name as part of the queue table, queue function and queue trigger names. This fixes issues where tables with the same name in different schemas on the source database would not work for these replication types due to name conflicts.
-- NOTE: Existing jobs shouldn't be affected by this, and you don't have to remake any of your jobs unless you run into this issue. All jobs created after this update are installed with the new queue naming format.

0.6.0
-- IMPORTANT NOTE: Before installation, check permissions on the following functions that were dropped. They have a new signature, so they will need to be granted the previous versions' permissions.
-- updater, dml, and logdel maker functions can now automatically obtain the primary key or unique index from the source table. Parameters to manually set the key columns are still part of the maker functions if needed, but are now optional.
-- Made the source_table column in the config table unique for dml and logdel replication. Multiple jobs cannot share the same source because of the source queue tables.
-- For all but snap, made the destroyer functions more intelligent so they won't accidentally destroy local tables that aren't set up with mimeo.
-- dml_maker() & logdel_maker() now clean up after themselves on the source database tables if a maker run fails. They will remove the queue table, function & trigger if and only if configuration information for the given source table does not exist in their respective configuration table.
-- New p_pulldata option for all maker functions to allow not pulling data from the source if desired. It is set to TRUE by default.
-- Documentation updates.
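A sketch of the 0.7.0 conditional option in a maker call (the "mimeo" schema, the snapshot_maker argument layout, and the table/column names are assumptions; p_condition itself is documented above):

    -- hypothetical example: only pull rows matching the condition from the source
    SELECT mimeo.snapshot_maker('public.orders', 1,
        p_condition := 'WHERE created_at > ''2013-01-01''');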
0.5.3
-- Allow the inserter, updater, dml & logdel maker functions to handle the destination table already existing. In that case, the destination table will not be touched and no data will be pulled from the source. For inserter & updater, last_value will be set to either the max value in the current destination data or the timestamp at the time the maker function runs.
-- Updated documentation.
-- Some code cleanup for simplification & clarity.

0.5.2
-- Fixed all temp tables not getting removed in refresh_dml(). This caused errors if there were no new rows for consecutive runs in the same session.

0.5.1
-- Fixed the table definition for the refresh_config table to not use a hardcoded schema name for the type column.
-- Added the public schema to functions that change the search_path for their run. Fixes issues with finding certain objects located in the public schema.

0.5.0
-- Restructured SQL source files in the /sql folder. Run 'make' to create the single file needed for extension installation, or just cat all the files in /sql/tables and /sql/functions together into the properly formatted filename.
-- IMPORTANT NOTE: All maker functions have been dropped and recreated. Please check permissions before and after the update!
-- Created dml_maker, logdel_maker, dml_destroyer & logdel_destroyer functions. These require a schema on the source database that the mimeo replication user owns, assumed to be the same schema as where the extension is installed on the destination. They also require giving the mimeo replication user trigger privileges on the source table.
-- Fixed refresh_dml to actually delete rows that were deleted on the source.
-- Removed temporary table creation in snapshot_destroyer if ARCHIVE was set. It now renames the current snap table to the old view name, which allows any permissions, indexes, etc. to be kept.
-- Changed table drop statements in snapshot_destroyer to be friendlier with other parts of the extension (DROP IF EXISTS).
-- Simplified the maker functions to have only one version, and to create the local table more efficiently using the now-better snapshot_destroyer. A custom destination table name is an optional argument; the default is NULL, in which case the maker creates the destination table with the same schema and table name as the source.
-- Updated the auth() function to support a passwordless authentication string.

0.4.6
-- Fixed bug in refresh_snap that was causing the post_scripts not to run when a change on the source schema happened.
-- Removed the unused mviews table.

0.4.5
-- Added the run_refresh() function to allow easier scheduling of batch refresh jobs. It uses the new period column in the config tables to determine how often a job should run (see the sketch below).
-- Added a batch_limit column to the parent config table. This is used by default for refresh jobs that support it, but the p_limit parameter to the function can override it. It was put in the parent table so run_refresh() can set limits, and just to make things easier, since snap is the only type that doesn't use it.
-- Handle the QUERY_CANCELED exception. This only releases the advisory lock, to prevent a manual run/cancel from locking out all other jobs. REMINDER: a mimeo replication job should be cancelled with the jobmon.cancel_job() function to properly log the cancellation.
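A minimal sketch of invoking run_refresh() (the "mimeo" schema qualification is an assumption; per the 0.7.2 notes above, the batch-count argument defaults to 4):

    -- e.g. called every minute from cron; runs whichever jobs' period has elapsed
    SELECT mimeo.run_refresh();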
0.4.4
-- IMPORTANT UPDATE NOTE: Old versions of functions were dropped. Check function permissions before and after this update to ensure they're reset properly.
-- Allow refresh inserter/updater to repull either all data or a specified time period of data from the source. refresh_dml can do a full refresh of data from the source. This was NOT set up for refresh_logdel at this time.
-- Made the debug parameter optional, defaulting to false. It can be turned on by using the named parameter option with functions that have multiple defaults. The batch limit should now be passed as a named parameter as well to ensure future compatibility.

0.4.3
-- Handle edge case when a refresh inserter/updater batch is equal to the limit and the limit cuts off rows with equal timestamp values past the batch limit. In this case, the rows with the upper boundary timestamp will be removed from the batch. If the batch is equal to the limit and all rows contain exactly the same timestamp value, this will cause a job failure; the batch limit must be increased to handle it.
-- Simplified how refresh_updater figures out its upper boundary value.
-- Named the optional limit parameter in refresh functions (p_limit).
-- Fixed resetting the search path in the advisory lock attempt.
-- Fixed spelling of boundary (boundry) in refresh_updater. Misspelled variables make debugging a pita.

0.4.2
-- Fixed exceptions in the remaining functions to handle an exception being thrown before the first logged step. inserter/updater were fixed in 0.4.1.

0.4.1
-- Fixed inserter/updater timestamp-based refresh to be able to handle DST for servers not running in GMT/UTC.
-- IMPORTANT NOTE: All jobs made before this update will default to the dst_active config option being true. BE SURE TO CHECK YOUR CONFIGURATION SO IT IS SET ACCORDINGLY! It was set to true to ensure data isn't missed by accident for existing jobs, but this will cause replication to stop during DST time changes. Please plan accordingly.
-- Any new jobs created using the inserter/updater maker functions will set the dst_active option based on the result of the dst_utc_check() function.

0.4.0
-- Restructured the config table. Made a child table for each refresh type inheriting from a generic parent. This allows tighter control of the data and easier extension maintenance.
-- Simplified inserter/updater destroyer functions.
-- Fixed inserter/updater/dml/logdel refresh functions to better handle no new rows from the source.
-- Fixed inserter/updater maker functions to set the proper type in the config table, and changed the boundary parameter from text to interval.
-- Cleaned up unused variables in functions.
-- More consistent code formatting of functions.

0.3.3
-- Added new updater_maker and updater_destroyer functions. Also added support for composite keys in the refresh_updater function.

0.3.2
-- Added new inserter_maker and inserter_destroyer functions.

0.3.1
-- Made dblink_mapping.data_source_id a real serial column (default is the next sequence ID) to make setup easier.
-- Made the non-existent database link ID error a little clearer.
-- Made the snapshot_destroyer parameter name clearer about what its use is. This required dropping the function, so please re-check your function permissions.
-- Documentation update.

0.3.0
-- Added new snapshot_maker and snapshot_destroyer functions.

0.2.3
-- Added ORDER BY to the remote select query to fix missing data on the destination when the limit is actually used.

0.2.2
-- Changed to using pg_try_advisory_lock and failing gracefully when concurrent jobs are running. Logs that the job didn't run and why (see the pattern sketched below).
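The general shape of that concurrency guard, using stock PostgreSQL advisory-lock functions (a sketch of the pattern, not mimeo's exact code; the lock key derivation is an assumption):

    -- try to take a per-job lock; returns false immediately if another session already holds it
    SELECT pg_try_advisory_lock(hashtext('refresh_public.my_table'));
    -- ...run the refresh only if the call above returned true, then release the lock:
    SELECT pg_advisory_unlock(hashtext('refresh_public.my_table'));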
0.2.1
-- Actually fix the dupe issue with the inserter function.

0.2.0
-- Renamed refresh_incremental to refresh_inserter.
-- Fixed bug in refresh_inserter that would cause duplicate inserts and/or missing data.
-- Added refresh_updater.
-- Added a type column to the config table. This allows easier automation (e.g. a cron job to run all snaps). NOTE: After updating this table, set the type for all current jobs and then set the column to NOT NULL.