Coalesce just reorganizes the existing data in place and leaves any fully freed blocks it created belonging to the index - it does not release them, and it will not cause the index to have to reallocate them.
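For example (a minimal sketch with a hypothetical index name), coalescing merges adjacent free leaf blocks in place, while a rebuild recreates the index segment:
-- idx_toto is a hypothetical index name
ALTER INDEX idx_toto COALESCE;
-- by contrast, a rebuild recreates the segment
ALTER INDEX idx_toto REBUILD;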
To reclaim the free space of a table, you can use:
TRUNCATE TABLE table_toto DROP STORAGE
or
ALTER TABLE table_toto DEALLOCATE UNUSED
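A hedged sketch of the second form: DEALLOCATE UNUSED only releases space above the high-water mark, and the optional KEEP clause retains a given amount of it:
ALTER TABLE table_toto DEALLOCATE UNUSED KEEP 100M;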
This is the method used by Percona (a consolidated shell sketch follows the steps and the link below):
1) Connect to MySQL and run FLUSH TABLES WITH READ LOCK
Note – this command may take a while to complete if you have long-running queries. The catch here is that FLUSH TABLES WITH READ LOCK actually waits for all statements to complete, even SELECTs, so be careful if you have any long-running queries. If you're using only InnoDB tables and do not need to synchronize the binary log position with the backup, you can skip this step.
2) While holding the connection open, run: lvcreate -L16G -s -n dbbackup /dev/Main/Data – This will create a snapshot named dbbackup for the logical volume Main/Data. You should specify enough undo space to hold the modifications made during the backup process – I've specified 16GB in this case. If your undo size is not large enough, the snapshot will be invalidated and the backup will be aborted.
Sometimes you might run into errors at this step. The most common one I've recently seen is: snapshot: Required device-mapper target(s) not detected in your kernel – This means the snapshot module is not loaded in your kernel by default and you need to load it, which is done by running modprobe dm-snapshot.
3) Now you have created the logical volume and can unlock the tables, but before that you should record the binary log position, which is done by running SHOW MASTER STATUS – this is the binary log position you'll need in order to point MySQL slaves created from this snapshot at the master.
4) The snapshot is created, so now you want to let the MySQL server continue, which is done by running UNLOCK TABLES or simply closing the connection.
5) Mount the backup filesystem: mount /dev/Main/dbbackup /mnt/backup
6) Copy the data to the backup. Normally you can skip the slow query log and the error log while taking the backup. You can also skip most of the binary logs – however, if some of your slaves are far behind, you might want to keep the last few binary logs just in case, or you can assume that in case of recovery from the backup you will need to restore the slaves as well, and skip the binary logs in your backup process.
7) Unmount the filesystem: umount /mnt/backup
8) Remove snapshot: lvremove -f /dev/Main/dbbackup
If you want to create a slave based on such a snapshot, you need to perform a couple more simple steps:
9) Extract/copy the database to the slave's database directory.
10) Start the MySQL server. Wait for it to perform recovery.
11) Use CHANGE MASTER TO to point the slave to the saved binary log position:
change master to master_host="master", master_user="user", master_password="password", master_log_file="host-bin.000335", master_log_pos=401934686;
12) Run START SLAVE to restart replication.
With a slightly modified process you can clone slaves from slaves without stopping them – you just need to use SHOW SLAVE STATUS instead of SHOW MASTER STATUS to find the appropriate binary log position. Be careful, however – cloning a slave from a slave also clones any inconsistencies in the data that the slave could have accumulated – especially if you use slave_skip_errors or sql_slave_skip_counter. Cloning the master, you're starting from a consistent copy.
https://www.percona.com/blog/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/
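Consolidated as a shell sketch (the volume group Main, the logical volume Data, the mount point /mnt/backup and the rsync paths are the example names used above or assumptions to adapt to your setup):
# in an open mysql session: FLUSH TABLES WITH READ LOCK; SHOW MASTER STATUS;
lvcreate -L16G -s -n dbbackup /dev/Main/Data    # snapshot with 16G of undo space
# back in the same mysql session: UNLOCK TABLES;
mount /dev/Main/dbbackup /mnt/backup            # mount the frozen copy
rsync -a /mnt/backup/mysql/ /backup/mysql/      # copy the datadir (logs can be skipped)
umount /mnt/backup
lvremove -f /dev/Main/dbbackup                  # drop the snapshot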
$ psql -qAt -d mydatabase -c "SELECT 'ALTER TABLE '||schemaname||'.'||tablename||' OWNER TO new_owner;' FROM pg_tables WHERE schemaname = 'myschema'" > data.txt
$ psql < data.txt -d mydatabase
source : http://penningpence.blogspot.fr/2014/09/changing-owner-of-multiple-database.html
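For illustration, data.txt will contain one generated statement per table, along the lines of (hypothetical table names):
ALTER TABLE myschema.clients OWNER TO new_owner;
ALTER TABLE myschema.commandes OWNER TO new_owner;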
Install pgbouncer and configure the .ini:
cat /etc/pgbouncer/pgbouncer.ini
[databases]
<aliasBDD> = dbname=test user=pg_test port=5433 pool_size=48
[pgbouncer]
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
auth_file = /etc/pgbouncer/userlist.txt
listen_addr = <hostname>-pgb
listen_port = 5432
auth_type = trust
admin_users = pgbouncer
stats_users = monitor
pool_mode = transaction
#server_reset_query =
server_reset_query =
max_client_conn = 1800
default_pool_size = 20
unix_socket_dir = /tmp
ignore_startup_parameters = extra_float_digits, application_name
server_check_query = select 1
server_check_delay = 10
log_connections = 1
log_disconnections = 1
log_pooler_errors = 1
Connect to pgbouncer:
su - pgbouncer
psql
show databases: lists the databases configured in the [databases] section
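A hedged example of reaching the admin console with the settings above (admin_users = pgbouncer, listen_port = 5432; "pgbouncer" is the built-in admin pseudo-database):
psql -h <hostname>-pgb -p 5432 -U pgbouncer pgbouncer
pgbouncer=# SHOW DATABASES;
pgbouncer=# SHOW POOLS;
pgbouncer=# SHOW STATS;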
What is the point of running an EXPLAIN on a slow query?
Well, for the record: in a query, when you don't put single quotes around varchar values,
=> MySQL does not use the index.
Boooo.
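A minimal sketch (hypothetical table and column): EXPLAIN makes the problem visible, because the unquoted literal forces an implicit cast and the index on the varchar column is ignored:
EXPLAIN SELECT * FROM clients WHERE code = 12345;     -- type = ALL: full table scan, index ignored
EXPLAIN SELECT * FROM clients WHERE code = '12345';   -- type = ref: the index on code is used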