Posts Tagged ‘MySQL’

MySQL and MariaDB innodb_file_per_table

For those of you pushing your MySQL instances to higher levels, you’ll have realized at some point that, by default, all InnoDB tables store their data and indexes in a single shared tablespace file (usually ibdata1).

This has several very bad outcomes:

  • the ibdata1 file becomes huge after some time (we’ve seen it exceed 8GB)
  • starting the MySQL server takes longer
  • deleting an InnoDB table *doesn’t* reduce the space used by ibdata1
  • to reclaim that space, you have to dump your InnoDB tables, shut down the server, remove ibdata1 completely, restart the server so it reinitializes ibdata1 as an empty file, then re-import the dump (which is, of course, difficult or dangerous to do in a high-availability context)

I just found out that a setting called “innodb_file_per_table” has existed since at least MySQL 5.1, which makes InnoDB store each table in its own .ibd file.

The MySQL documentation explains the advantages and disadvantages of the approach. As you might have guessed, the main disadvantages of using separate files are that you end up with more files (so scanning them all takes a little more I/O time) and that, of course, you lose backward compatibility with previous versions of MySQL.

This seems to be available in MariaDB as well.
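As a minimal sketch, enabling it only takes one line in the [mysqld] section of my.cnf (note that tables created before the change stay in the shared tablespace until they are rebuilt):

```ini
[mysqld]
innodb_file_per_table = 1
```

After a restart, an existing table can be moved into its own .ibd file with something like ALTER TABLE employee ENGINE=InnoDB; (using “employee” here just as an example table name).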

The best value for max_connections in MySQL

January 13, 2014

According to the article, there is no single “best value”, and a lot depends on the application behind it, but they say that on machines with ~16GB of RAM it can sit around 1000 without much trouble.

Beyond that, you would have to start thinking about a thread pool.

See the original article for more details, and the previous post on this blog to learn how to change this value live (besides changing it in the configuration file so it survives a reboot).
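As a quick sketch of how that looks in practice (the SET GLOBAL change takes effect immediately but is lost on restart, so mirror it in my.cnf under [mysqld]):

```sql
-- check the current value
SHOW VARIABLES LIKE 'max_connections';
-- change it live (lost on the next restart)
SET GLOBAL max_connections = 1000;
```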

Vim regexp: transforming multiple SQL inserts into a big one

If you ever face a very slow MySQL import based on a very long file of INSERT statements, and you want to optimize it by merging many queries into a few multi-row ones, you can do something like the following.

Imagine you have a lot of these:

INSERT INTO `employee` (`cod_modular`, `sequential_pago`, `a_pater`, `a_mater`, `names`, `type_id_doc`, `document_number`, `cod_dre`, `desc_dre`, `cod_cgef`, `desc_cgef`, `nec`, `level`, `cod_ie`, `name_ei`, `esc_magisterial`, `cod_charge`, `desc_charge`, `situation`, `dias_licence`, `period`, `planner`) VALUES ('1991119073', 283001, 'PHILIPS', 'Castle', 'BETTY', 'DNI', '01712532', '21', 'SAN MARTIN', 'DX', 'CGEF A', '02', '3', 'DX023159', 'SCHOOL A', '0', '4006', 'PROFESSOR', 'Enabled', 0, 'May 2013', 'Active');
INSERT INTO `employee` (`cod_modular`, `sequential_pago`, `a_pater`, `a_mater`, `names`, `type_id_doc`, `document_number`, `cod_dre`, `desc_dre`, `cod_cgef`, `desc_cgef`, `nec`, `level`, `cod_ie`, `name_ei`, `esc_magisterial`, `cod_charge`, `desc_charge`, `situation`, `dias_licence`, `period`, `planner`) VALUES ('1991119084', 300003, 'MONTH', 'Roof', 'ESTHER', 'DNI', '01712482', '6', 'UCAYALI', 'EI', 'CGEF B', '02', '2', 'EI022330', 'SCHOOL B', '1', '4006', 'PROFESSOR', 'Enabled', 0, 'May 2013', 'Active');

And you want it rather like this:

INSERT INTO `employee` 
(`cod_modular`, `sequential_pago`, `a_pater`, `a_mater`, `names`, `type_id_doc`, `document_number`, `cod_dre`, `desc_dre`, `cod_cgef`, `desc_cgef`, `nec`, `level`, `cod_ie`, `name_ei`, `esc_magisterial`, `cod_charge`, `desc_charge`, `situation`, `dias_licence`, `period`, `planner`) VALUES
('1991119073', 283001, 'PHILIPS', 'Castle', 'BETTY', 'DNI', '01712532', '21', 'SAN MARTIN', 'DX', 'CGEF A', '02', '3', 'DX023159', 'SCHOOL A', '0', '4006', 'PROFESSOR', 'Enabled', 0, 'May 2013', 'Active'),
('1991119084', 300003, 'MONTH', 'Roof', 'ESTHER', 'DNI', '01712482', '6', 'UCAYALI', 'EI', 'CGEF B', '02', '2', 'EI022330', 'SCHOOL B', '1', '4006', 'PROFESSOR', 'Enabled', 0, 'May 2013', 'Active');

And obviously you’d prefer to have like 250 of these inserts grouped together…

Well, here is the good news: you can do it in one command (although you have to repeat it a few times) with Vim. Let’s assume the fixed part of the query (from the first backtick of the field names to the end of the word VALUES) is 305 characters long (you’ll have to check that the first time, but the good news is it doesn’t change). What you want to do is select the current line (with the “v” key), then issue a command that replaces the trailing semicolon, the line break, and the whole static SQL prefix of the next line with just a comma, and repeat that 250 times. You would then do this:

:'<,'>250s/;\nINSERT INTO .\{305\}/,/

This should work out of the box. If your static string has a different number of characters, change the 305 accordingly. If you want to do this more or fewer than 250 times (more is not advised, as this can be a very heavy operation for your Vim process), just change 250 to whatever you want.

If you have a few groups like this to process, just press the down key, then “:” followed by the up key to recall and re-execute the same command.

If you have to repeat that a hundred times, then you might want to look for another range than '<,'>.
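If you would rather do this outside of Vim, the same grouping can be sketched in a few lines of Python (a hypothetical helper, not part of the tip above; it assumes every statement fits on one line and shares the exact same INSERT INTO … VALUES prefix):

```python
import re

# Matches "INSERT INTO ... VALUES (row);" and captures the static
# prefix (up to and including "VALUES ") and the row tuple separately.
INSERT_RE = re.compile(r"(INSERT INTO .*? VALUES )(\(.*\));$")

def merge_inserts(sql_text):
    """Merge consecutive single-row INSERTs sharing the same prefix
    into one multi-row INSERT statement."""
    merged = []
    prev_prefix = None
    for line in sql_text.strip().splitlines():
        m = INSERT_RE.match(line.strip())
        if not m:                      # non-INSERT line: keep as-is
            merged.append(line)
            prev_prefix = None
            continue
        prefix, row = m.groups()
        if prefix == prev_prefix:      # same prefix: append the row tuple
            merged[-1] = merged[-1][:-1] + ",\n" + row + ";"
        else:                          # new prefix: start a new statement
            merged.append(prefix + row + ";")
            prev_prefix = prefix
    return "\n".join(merged)
```

Write the result to a new file and feed it to mysql as usual; grouping rows this way mostly saves per-statement parsing and round trips.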


Profiling MySQL/MariaDB queries

If you ever face an optimization issue in MySQL or MariaDB and want to measure/benchmark the difference in execution between two queries, you should definitely know about the set profiling=1; command.

Here is how it works:

mysql> set profiling=1;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT count(login_user_id)  FROM stats.track_e_online WHERE DATE_ADD(login_date, INTERVAL 60 MINUTE) >= '2012-06-11 11:56:20';
| count(login_user_id) |
|                   65 |
1 row in set (0.00 sec)
mysql> show profile;
| Status                         | Duration |
| starting                       | 0.000022 |
| checking query cache for query | 0.000055 |
| checking permissions           | 0.000010 |
| Opening tables                 | 0.000012 |
| System lock                    | 0.000005 |
| Table lock                     | 0.000023 |
| init                           | 0.000027 |
| optimizing                     | 0.000010 |
| statistics                     | 0.000008 |
| preparing                      | 0.000009 |
| executing                      | 0.000004 |
| Sending data                   | 0.001994 |
| end                            | 0.000006 |
| query end                      | 0.000004 |
| freeing items                  | 0.000014 |
| storing result in query cache  | 0.000227 |
| logging slow query             | 0.000004 |
| logging slow query             | 0.000028 |
| cleaning up                    | 0.000003 |
19 rows in set (0.01 sec)

mysql> SELECT count(login_user_id)  FROM stats.track_e_online WHERE '2012-06-11 11:56:20'  <= DATE_ADD(login_date, INTERVAL 60 MINUTE);
| count(login_user_id) |
|                   65 |
1 row in set (0.00 sec)

mysql> show profile;
| Status                         | Duration |
| starting                       | 0.000032 |
| checking query cache for query | 0.000083 |
| checking permissions           | 0.000014 |
| Opening tables                 | 0.000011 |
| System lock                    | 0.000005 |
| Table lock                     | 0.000031 |
| init                           | 0.000026 |
| optimizing                     | 0.000009 |
| statistics                     | 0.000008 |
| preparing                      | 0.000009 |
| executing                      | 0.000005 |
| Sending data                   | 0.001989 |
| end                            | 0.000007 |
| query end                      | 0.000003 |
| freeing items                  | 0.000015 |
| storing result in query cache  | 0.000340 |
| logging slow query             | 0.000005 |
| logging slow query             | 0.000030 |
| cleaning up                    | 0.000003 |
19 rows in set (0.00 sec)
mysql> set profiling=0;
Query OK, 0 rows affected (0.00 sec)

Well, granted, there isn’t much of a difference between those two examples. I believe that’s because these queries have been stored in the query cache already, but you get the idea…

Note that profiling is a session variable, which means it only acts in your session of the MySQL/MariaDB client. You can find more information about profiling in the MySQL documentation pages.

Of course, this also works in the great MariaDB server.
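A couple of related commands are worth knowing (a quick sketch of the standard profiling syntax; the query id 2 below is hypothetical, taken from whatever SHOW PROFILES lists in your session):

```sql
SHOW PROFILES;                 -- list recently profiled queries with their ids
SHOW PROFILE FOR QUERY 2;      -- show the breakdown for a specific query id
SHOW PROFILE CPU FOR QUERY 2;  -- same, with CPU usage columns added
```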

Important meetings

October 30, 2011

Sometimes I like to treat this blog as a personal one and write down things that might affect our future in fundamental ways… or at least my own future, potentially.

Anyway, I’m currently living in Peru and have had a series of meetings over the last two months with the new Electronic Government Office (ONGEI). I can’t say much for now (I don’t want to spoil it), but I was in a meeting with the Prime Minister, who seemed to agree with my proposal (broadly speaking) and told me “you’re not Peruvian, are you?” in a kind of funny way (I’ve picked up the Peruvian accent so well that my Spanish friends make fun of me now). I’d also like to think I’ve been able to bend the shape of the future of the One Laptop Per Child project in Peru a little (on the side of Good) through my contact with the office’s director, but that was mostly a side effect of what I hope is coming.

On Tuesday this week, I met with the founder/creator of MySQL and MariaDB, Michael Widenius (see picture). I didn’t really expect to like the guy, but as it turns out, the experiences of what happened on the MySQL-MariaDB side and on the Dokeos/Chamilo side felt so similar that I ended up having a good time. Those two historical events (for the projects themselves) happened pretty much at the same time, as well.

Yannick Warnier (Chamilo) and Michael Widenius (MariaDB) in Lima

This put me in the usual sharing mood, and I thought about Latinux. Latinux is mainly a Latin-American group (at the same time an association, a company, and a cluster of open-source software start-ups) founded by my new friend (or at least close acquaintance) Ricardo Strusberg (*not* the guy on the left, that’s Santiago Gonzales), with whom I have met several times over the last 12 months, and with whom we’re building the Official Chamilo Teacher and Admin Certifications (the first of which is currently under review and should be out before our Chamilo Users Day Peru on the 18th of November, so people will be able to come around, attend the workshop and get certified the same day).

So… coming back to the sharing mood. As it turns out, MariaDB suffers from exactly the same problems Chamilo does right now (and probably LibreOffice, to some extent). It is better than the software it moved away from, it has most of the influential people on board, but there is no widespread adoption yet, because the original project kept the trademark and is still luring people into believing it is the best. Furthermore, the job was initially done so well, with so many people involved in the original projects, that it’s difficult to out-rank them with good intentions alone. A considerable marketing effort is necessary to let people know we’ve changed names, and that we are now better.

One of the reasons I’m so fond of this certification effort is that it will definitely give our project a boost: being able to get yourself certified and prove you are a true professional. If we do it, we’re definitely moving one big step forward in terms of promotion. And so could MariaDB. So I mentioned it to Michael, who immediately agreed it was a great idea, and as I’m writing this post (two weeks later), Ricardo has been discussing the details of such a certification, and they will be launching it at the beginning of next year across the Americas and France.

This specific encounter (which might shape the future of MariaDB considerably) came about casually, thanks to the organizers of the event, who arranged for us to meet. So thank you, guys!

I can’t believe how casually things can happen, but one thing is sure: if all of this had happened in another, more popular context, I would already have “casually” met a bunch of the most popular IT people on the planet (in fact, I have a few practical examples in mind involving friends of mine). So, in a way, Peru is a great place, but there are still other (great) places to be out there.

Import MySQL tables from .myd

February 28, 2011

This is an easy procedure to import any MyISAM table that you have in .myd, .myi and .frm format into a local database. I assume you have a MySQL server and a MySQL client on your local machine.

  • First, identify the name of the table (that’s the name before the “.myd” extension).
  • Connect to your local database and create a dummy database (let’s call it “import”)
  • Create a table inside that database with the exact same name as the one you need to import. Let’s say the file is called user.myd, so call the table “user”
  • Disconnect from your database and shut down your MySQL server (Ubuntu: sudo /etc/init.d/mysql stop)
  • Copy the backup files (user.*) into your MySQL data directory (Ubuntu: /var/lib/mysql/import/) and make sure they are owned by the mysql user (sudo chown mysql:mysql /var/lib/mysql/import/user.*)
  • Restart your MySQL server (Ubuntu: sudo /etc/init.d/mysql start)
  • Enjoy the imported table.

Reset MySQL root password

November 19, 2010

Reference: (partly copied here to avoid losing the source)

# /etc/init.d/mysql stop

Stopping MySQL database server: mysqld.
# mysqld_safe --skip-grant-tables &
[1] 5988
Starting mysqld daemon with databases from /var/lib/mysql
mysqld_safe[6025]: started
# mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 4.1.15-Debian_1-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> use mysql;

mysql> update user set password=PASSWORD("NEW-ROOT-PASSWORD") where User='root';

mysql> flush privileges;

mysql> quit
# /etc/init.d/mysql stop

Personal note: use --skip-syslog if it fails, and don’t forget to start the server normally again afterwards (/etc/init.d/mysql start).