This is a very myopic view of self-managed databases. With Docker and k8s you can easily build an image that has HA, replication, backups, dev self-service, and security baked in. Using Docker, k8s, Patroni, and GitLab, my small team of 7 DataOps members quickly and easily built out a DaaS platform that lets our devs instantly provision a database that streams WAL to S3 and takes nightly backups. We also spin up Pgpool automatically via GitLab for out-of-the-box horizontal scaling.
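For anyone curious what "backups baked in" looks like in practice, here is a minimal Patroni config sketch that streams WAL to S3 via wal-g. The cluster name, etcd endpoint, and S3 bucket are placeholders, and the settings your own stack needs will differ:

```yaml
# Minimal Patroni sketch: HA Postgres with WAL archiving to S3 via wal-g.
# scope, etcd host, and the S3 bucket below are illustrative placeholders.
scope: demo-cluster
name: pg-node-1

etcd3:
  hosts: etcd.internal:2379     # your DCS endpoint will differ

bootstrap:
  dcs:
    postgresql:
      use_pg_rewind: true
      parameters:
        wal_level: replica
        archive_mode: "on"
        # ship each completed WAL segment to S3 via wal-g
        archive_command: "wal-g wal-push %p"

postgresql:
  data_dir: /var/lib/postgresql/data
  # wal-g reads its target bucket from the environment,
  # e.g. WALG_S3_PREFIX=s3://my-db-backups/demo-cluster
```

Bake something like this (plus the wal-g binary and credentials) into the image, and every database a dev provisions gets continuous WAL archiving for free.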
It does take knowledge of Docker and k8s, but that's something every DevOps engineer should already have. The DBA landscape is also quickly evolving, and DBAs should be learning how to work with Docker and k8s as well. I would argue that having that knowledge, plus a deep understanding of your data layer, gives you much more flexibility and freedom in how you maintain your databases, while still providing quick automated templates via Docker and k8s tooling like kustomize.
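To sketch the kustomize-style templating mentioned above (the base and patch paths here are hypothetical), an overlay can stamp out a per-team database cluster from one shared base:

```yaml
# kustomization.yaml for a hypothetical per-team overlay;
# the base directory and patch file names are illustrative only.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: team-a-db
resources:
  - ../../base/patroni-cluster       # shared HA Postgres manifests
patches:
  - path: replicas-and-storage.yaml  # per-team size/storage overrides
    target:
      kind: StatefulSet
      name: patroni
```

A GitLab pipeline can then run `kubectl apply -k` on the overlay, which is how "push-button" provisioning falls out of plain k8s tooling rather than a cloud vendor's console.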
While having any database be push-button on a particular cloud can seem nice, it also locks you further into that cloud provider. I am able to lift and shift my whole data stack from AWS to GCP or Azure if I want. That includes using amazing Postgres extensions like Citus or TimescaleDB, which may not be available on the cloud provider you are currently on.
In short, fully managed databases may seem appealing at first, but they will quickly turn into the Oracle or SQL Server of the cloud. They may make sense in certain circumstances, such as a new startup with only a small team of devs who just need a quick datastore for their app. But in the long run you are much better off gaining the knowledge to manage an open source database in a cloud native setup that gives you the ultimate flexibility to run your database how and where you want.