PostgreSQL: Migrating Large Objects (LOBs) with Parallelism and PIPES
Why This Approach?

Migrating large objects (LOBs) between PostgreSQL servers can be challenging due to their size, performance impact, and complexity. Traditional methods usually export the LOBs to files and re-import them, which adds overhead and slows down the process. In addition, cloud restrictions can limit the use of other tools and techniques, such as:

- pgcopydb: Welcome to pgcopydb's documentation! — pgcopydb 0.17~dev documentation
- pg_dump filtered with Row Level Security (RLS): SoporteDBA: PostgreSQL pg_dump filtering data by using Row Level Security (RLS)

This solution introduces a parallelized migration script that:

- Reads directly from pg_largeobject.
- Splits the work across multiple processes using the MOD() function on loid.
- Streams data through pipes, with no intermediate files.
- Scales easily by adjusting the parallel degree.
- Possible feature: commit-size logic to support resuming by excluding already migrated LOBs (a sketch of this idea appears after the "How to Run" section below).

Key Benefits

- Direct streaming: no temporary files, which reduces disk I/O.
- Parallel execution: faster migration by leveraging multiple processes.
- Simple setup: just two helper scripts for the source and destination connections.

Reference

Scripts reference: Moving data with PostgreSQL COPY and \COPY commands | Microsoft Community Hub

Source Connection

```bash
# pgsource.sh -- connects to the source database
# PLEASE REVIEW CAREFULLY THE CONNECTION STRING FOR THE SOURCE SERVER
PGPASSWORD=<password> psql -t -h <sourceservername>.postgres.database.azure.com -U <username> <database>
```

Grant execute permissions:

```bash
chmod +x pgsource.sh
```

Destination Connection

```bash
# pgdestination.sh -- connects to the target database
# PLEASE REVIEW CAREFULLY THE CONNECTION STRING FOR THE DESTINATION SERVER
PGPASSWORD=<password> psql -t -h <destinationservername>.postgres.database.azure.com -U <username> <database>
```

Grant execute permissions:

```bash
chmod +x pgdestination.sh
```

Parallel Migration Script

```bash
# transferlobparallel.sh -- performs the parallel migration of large objects
echo > nohup.out
echo 'ParallelDegree: '$1 'DateTime: '`date +"%Y%m%d%H%M%S"`

# Check if there are no large objects to migrate
count=$(echo "select count(1) from pg_largeobject;"|./pgsource.sh)
count=$(echo "$count" | xargs)
if [ "$count" -eq 0 ]; then
  echo "There are no large objects to migrate. Stopping the script."
  exit 0
fi

par=$1
# One background worker per parallel degree: each worker picks the LOBs whose
# loid satisfies mod(loid, par) = i-1, rebuilds every LOB as a
# lo_create/lo_open/lowrite/lo_close statement sequence wrapped in a
# transaction, and pipes that stream straight into the destination server.
for i in $(seq 1 $1); do
nohup /bin/bash <<EOF &
echo "\copy (select data from (select 0 as rowsort, 'begin;' as data union select 1, concat('SELECT pg_catalog.lo_create(', lo.loid, ');SELECT pg_catalog.lo_open(', lo.loid, ', 131072);SELECT pg_catalog.lowrite(0,''', string_agg(lo.data, '' ORDER BY pageno), ''');SELECT pg_catalog.lo_close(0);') from pg_largeobject lo where mod(lo.loid::BIGINT,$par)=$i-1 group by lo.loid union select 2, 'commit;') order by rowsort) to STDOUT;"|./pgsource.sh|sed 's/\\\\\\\/\\\\/g'|./pgdestination.sh; echo "Execution thread $i finished. DateTime: ";date +"%Y%m%d%H%M%S";
EOF
done

# Follow the shared log until every thread reports completion
tail -f nohup.out|grep Execution
```

Grant execute permissions:

```bash
chmod +x transferlobparallel.sh
```

NOTE: The script never exits on its own, because its last command is `tail -f nohup.out`. Watch the output until every thread has reported completion, or check from another session whether the `psql` processes are still running.

How to Run

```bash
./transferlobparallel.sh <ParallelDegree>
```

Example:

```bash
./transferlobparallel.sh 3
```

This runs 3 parallel processes that migrate the LOBs directly to the destination server. Two optional sketches follow: one for the resume idea mentioned above and one for verifying the result.
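The resume feature flagged earlier is not implemented in the script; the following is only a minimal sketch of one way to approach it, reusing the pgdestination.sh helper defined above. The variable name done_loids and the extra predicate are hypothetical additions, not part of the original script, and an IN list only makes sense for a modest number of LOBs (a staging table would suit larger sets better).

```bash
# Hypothetical resume sketch: collect the loids that already exist on the
# destination so they can be excluded from the extraction query.
# '0' is a harmless placeholder when the destination has no LOBs yet.
done_loids=$(echo "select coalesce(string_agg(distinct loid::text, ','), '0') from pg_largeobject;" | ./pgdestination.sh | xargs)

# The WHERE clause inside transferlobparallel.sh could then be extended, e.g.:
#   where mod(lo.loid::BIGINT,$par)=$i-1 and lo.loid not in ($done_loids)
echo "Already migrated loids: $done_loids"
```

Once the threads finish, a quick sanity check is to compare LOB counts and page counts on both sides. This is a minimal sketch that reuses the two helper scripts defined above; the query itself is not part of the original article.

```bash
# Compare the number of large objects and the number of data pages
# on the source and on the destination; both lines should match.
check="select count(distinct loid) as lobs, count(*) as pages from pg_largeobject;"
echo "Source:      $(echo "$check" | ./pgsource.sh | xargs)"
echo "Destination: $(echo "$check" | ./pgdestination.sh | xargs)"
```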
Performance Results

These basic metrics were gathered with a Linux client VM (with accelerated networking) co-located in the same region and availability zone as the source and target Azure Database for PostgreSQL Flexible Server instances; the servers and the Linux client used Standard 4-vCPU SKUs with no other special configuration.

- 16 threads: ~11 seconds for 500 MB.
- CPU usage: ~15% at 16 threads.
- Estimated migration time for 80 GB: ~30 minutes (extrapolating linearly: 80 GB is about 160 × 500 MB, and 160 × 11 s ≈ 29 minutes).

Key Takeaways

This approach dramatically reduces migration time and complexity by combining PostgreSQL's pg_largeobject catalog with parallelism and streaming: no intermediate storage is needed, and the psql client is the only required client software.

Disclaimer

The script is provided as is. Review it carefully before running or testing it; it is only a starting point that shows how to migrate large objects in a parallelized way without intermediate storage, and it can also be implemented or improved with the other methods mentioned above.

Working with Data in Azure Cosmos DB for PostgreSQL using Open PSQL Shell
Developing a multi-tenant app on PostgreSQL? Hitting scaling walls? Discover how Azure Cosmos DB for PostgreSQL extends your beloved PostgreSQL with superpowers: global distribution, effortless scaling, and simplified high availability. This guide equips you to conquer multi-tenancy challenges and build robust, scalable applications.

What's covered in this blog:

- Challenges with traditional PostgreSQL
- What Azure Cosmos DB for PostgreSQL is
- Citus for PostgreSQL
- Provisioning Azure Cosmos DB for PostgreSQL via the Azure portal
- Open PSQL Shell on Azure (a new feature in preview)
- How to create and query distributed tables in Azure Cosmos DB for PostgreSQL

In this blog, we leverage Azure Cosmos DB for PostgreSQL, which extends PostgreSQL with enhanced capabilities, in a step-by-step guide to this service as it is provided and managed on Azure.

Connect Azure Cosmos DB for PostgreSQL to your ASP.NET Core application
You're a software developer tasked with creating robust backend web applications for your team, always on the lookout for tools that can improve performance, scalability, and ease of use. Azure Cosmos DB for PostgreSQL is a powerful, globally distributed database service that integrates seamlessly with your SDKs. In this blog, we'll explore how to connect Azure Cosmos DB for PostgreSQL to your ASP.NET Core application, unlocking new levels of efficiency and reliability for your projects.