Tag Archives: MCslp Coalface

MySQL Documentation Myths

There are a few myths surrounding the MySQL documentation and how it works, and I thought I’d try and dispel some of those myths if I can. If you have any more questions or misunderstandings you want clarified, let me know.

Myth:

MySQL Documentation is written by the developers.

Reality

MySQL Documentation is written by a dedicated team of writers, with help and input from the developers. There are four main writers: Paul DuBois, Tony Bedford, Jon Stephens, and MC Brown (me!), plus our Team Lead, Stefan Hinz.

All the documentation staff are employed full time for the sole purpose of writing documentation. Sure, some of us get involved in other things too, but that’s basically the nature of the job. Some of us simply cannot help ourselves.

Myth

Docs team members are just writers and have no technical expertise.

Reality

It’s tempting to come back with a rude response to this one, but it is a comment I heard from someone at a conference. The reality is that all of us have some technical background, unsurprisingly often with MySQL. Some of us have expertise elsewhere too. Speaking only for myself, go look at MCslp.com for more info. If you want details, feel free to ask, but know that this myth is definitely busted.

Myth

The documentation is updated very rarely.

Reality

Our main tree, mysqldoc, is publicly available, and if you want to go view the commits to that tree, please feel free. It doesn’t take much to see that we commit to that tree all day, and every time we change something, the documentation gets rebuilt. How frequently? Well, on a typical day we will generate 10-15 new versions of each reference manual. It’s actually difficult to rebuild more frequently than that due to the sheer size of the documentation.

If you want to check the build date of the documentation, check the intro/preface of each document. The build and build date information is included there.

Myth

The MySQL documentation is small and unused.

Reality

You’d be amazed how many people need to be told RTFM, but a surprising number of people who criticize the MySQL documentation have never actually read it, or they looked at it years ago, couldn’t find what they were looking for, and haven’t bothered to look since.

The reality is that our documentation is over 2000 pages per reference manual, which means over 10,000 pages now just for MySQL. There are hundreds more pages on the GUI tools, Workbench, Cluster/NDBAPI, and the Enterprise Monitor.

As to the popularity, the hits to the online pages of our manual exceed the hits to every other section of the MySQL website by a significant factor. For the downloadable formats we get an average of 200,000 downloads in all the various formats each month, with occasional spikes up to 800,000. For the online manuals, the documentation pages make up about 45% of all the traffic on mysql.com. Or to say it another way, we account for almost half of all the web traffic that MySQL receives, including downloads.

In short, we have no shortage of interested readers.

Myth

Docs team don’t read comments

Reality

Actually, we all get an email each time you post a comment and all of us will read it, determine whether it is suitable, useful, or (occasionally) spam, and either ignore it, delete it, or comment on it accordingly. Often that will happen within minutes of your leaving the comment. If there’s a non-standard reply to your comment, you’ll get that too.

Now, we are aware that the comments system has its faults. For one, we have a single comment system for all the different versions of the manual, which means comments can be confusing and even misleading. We’re fixing that. We’re also trying to address the problem that some comments are really tips, while others are just plain comments and observations.

Any other comments or criticisms, let us know. We may not be aware of the problem, but if we know your pain we can do something about it.

Myth

Docs team don’t accept bugs or corrections

Reality

You can report a bug or correction to us using the standard bugs.mysql.com system, or drop us an email at docs@mysql.com.

Myth

Docs are ‘closed source’

Reality

The docs are not closed source – you can download the DocBook XML and the files and tools required to build them (well, apart from the XML parsers, Perl, and other bits and pieces). You can get hold of the repositories (via SVN) on the Tech Resources page.

That said, we don’t allow anybody outside the team to commit changes directly, but see the response above for information on how to provide changes and fixes. Again, this is something we are working to improve.

Myth

MySQL Documentation is not distributable

Reality

This mostly comes out of the fuss around Debian dropping the man pages from their MySQL distributions. You can see the explanation here: MySQL Documentation and Debian/Ubuntu.

The short answer is that it is a misunderstanding of our license for the documentation, which is not released under the same license as MySQL. You can provide the documentation if you provide MySQL, but not on its own. The reason for this is that our documentation is updated so regularly that we want to ensure that only genuine, up-to-date versions of our documentation get out there. Trust me, do a search for MySQL and some term and you will find versions of the manual that are months, or even years, out of date, which is no help to anybody.

It’s about trying to make our documentation readable and usable and not misleading.


OK, that’s enough myths busted for today, but if you hear any more, or just have additional questions, feel free to ask.

DimDim and MySQL University

Stop the press! My boss, Stefan Hinz, has just started blogging, with his first post here: Using NetBeans with MySQL.

So who is he? Well, Stefan is the guy that keeps the rest of us in the docs team in check and makes sure we do what we’re asked, when we’re asked, and that all of the machinery, legalities and management tasks happen in the background. Without him we really couldn’t function as effectively as we do.

It’s wonderful to see some other Docs team members getting in on the act (to be fair to the rest of the team, Jon is also a blogger). We are all writers; you would think blogging would come as a natural extension.

Behind the tease is the simple fact that the improved system for MySQL University I was talking about is getting a trial run this week.

We’ve been trying out Dimdim for web conferencing with David van Couvering and I have to say I’m pretty impressed.

We chatted, and we tried out screen sharing, presentations, and the whiteboard functionality; it all worked really nicely.

We’re going to be using it for the MySQL University session this week, Using MySQL with NetBeans. Space will be limited, but feel free to join us if you can.

Using the MySQL Doc source tree

I’ve mentioned a number of times that the documentation repositories that we use to build the docs are freely available, and so they are, but how do you go about using them?

More and more people are getting interested in being able to work with the MySQL docs, judging by the queries we get, and internally we sometimes get specialized requests.

There are some limitations – although you can download and access the docs and generate your own versions in various formats, you are not allowed to distribute or supply that information; it can only be used personally. The reasons and disclaimer for that are available on the main page for each of the docs, such as the one on the 5.1 Manual.

Those issues aside, if you want to use and generate your own docs from the Subversion source tree then you’ll need the following:

  • Subversion to download the sources
  • XML processors to convert the DocBook XML into various target formats; we include the DocBook XML/XSLT files you’ll need.
  • Perl, for some of the checking scripts and the ID mapping parts of the build process.
  • Apache’s FOP, if you want to generate PDFs; otherwise you can skip it.

To get started, you need to download the DocBook XML source from the public Subversion repository. We recently split the single Subversion tree containing the English language version into two repositories: one containing the pure content, and the other the tools required to build the docs. The reason for that is consistency across all of our repositories, internally and externally, for the reference manual in all its different versions.

You therefore need both repositories, and you need to check them out into the same directory:

$ svn checkout http://svn.mysql.com/svnpublic/mysqldoc
$ svn checkout http://svn.mysql.com/svnpublic/mysqldoc-toolset

Assuming you have downloaded the XML toolkit already, make sure you have the necessary Perl modules installed. You’ll need the Expat library, and the following Perl modules:

  • Digest::MD5
  • XML::Parser::PerlSAX
  • IO::File
  • IO::String

If you have CPAN installed, you can install them automatically using perl -MCPAN -e 'install modulename', or use your package management system to install the modules for you. You’ll get an error message during the build if something is missing.

OK, with everything in place you are ready to try building the documentation. You can change into most directories and convert the XML files there into a final document. For example, to build the Workbench documentation, change into the Workbench directory. We use make to build the various files and dependencies.

To build the full Workbench documentation, specify the main file, workbench, as the target, and the file format you want to produce as the extension. For example, to build a single HTML file, the extension is html. I’ve included the full output here so that you can see the exact output you will get:

make workbench.html
set -e; \
../../mysqldoc-toolset/tools/dynxml-parser.pl \
--infile=news-workbench-core.xml --outfile=dynxml-local-news-workbench.xml-tmp-$$ --srcdir=../dynamic-docs --srclangdir=../dynamic-docs; \
mv dynxml-local-news-workbench.xml-tmp-$$ dynxml-local-news-workbench.xml
make -C ../refman-5.1 metadata/introduction.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
../../mysqldoc-toolset/tools/idmap.pl refman/5.1/en introduction.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
make -C ../refman-5.1 metadata/partitioning.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
../../mysqldoc-toolset/tools/idmap.pl refman/5.1/en partitioning.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
make -C ../refman-5.1 metadata/se-merge.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
../../mysqldoc-toolset/tools/idmap.pl refman/5.1/en se-merge.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
make -C ../refman-5.1 metadata/se-myisam-core.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
../../mysqldoc-toolset/tools/idmap.pl refman/5.1/en se-myisam-core.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
make -C ../refman-5.1 metadata/sql-syntax-data-definition.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
../../mysqldoc-toolset/tools/idmap.pl refman/5.1/en sql-syntax-data-definition.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
make -C ../workbench metadata/documenting-database.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en documenting-database.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/foreign-key-relationships.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en foreign-key-relationships.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/forward-engineering.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en forward-engineering.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/grt-shell.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en grt-shell.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/images.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en images.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/installing.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en installing.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/layers.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en layers.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/notes.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en notes.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/plugins.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en plugins.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/printing.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en printing.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/reference.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en reference.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/reverse-engineering.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en reverse-engineering.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/server-connection-wizard.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en server-connection-wizard.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/stored-procedures.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en stored-procedures.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/tables.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en tables.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/text-objects.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en text-objects.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/tutorial.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en tutorial.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/validation-plugins.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en validation-plugins.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/views.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en views.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
XML_CATALOG_FILES="../../mysqldoc-toolset//catalog.xml" xsltproc --xinclude --novalid \
--stringparam repository.revision "`../../mysqldoc-toolset/tools/get-svn-revision`" \
--param map.remark.to.para 0 \
--stringparam qandaset.style "" \
../../mysqldoc-toolset/xsl.d/dbk-prep.xsl workbench.xml > workbench-prepped.xml.tmp
../../mysqldoc-toolset/tools/bug-prep.pl workbench-prepped.xml.tmp
../../mysqldoc-toolset/tools/idremap.pl --srcpath="../workbench ../gui-common ../refman-5.1 ../refman-common ../refman-5.0" --prefix="workbench-" workbench-prepped.xml.tmp > workbench-prepped.xml.tmp2
mv workbench-prepped.xml.tmp2 workbench-prepped.xml
rm -f workbench-prepped.xml.tmp
XML_CATALOG_FILES="../../mysqldoc-toolset//catalog.xml" xsltproc --xinclude --novalid \
--stringparam l10n.gentext.default.language en \
--output workbench.html-tmp \
../../mysqldoc-toolset/xsl.d/mysql-html.xsl \
workbench-prepped.xml
../../mysqldoc-toolset/tools/add-index-navlinks.pl workbench.html-tmp
mv workbench.html-tmp workbench.html

There’s lots in the output above, and I’ll describe the content as best I can without going into too much detail in this piece.

First off, the make triggers some dependencies: the creation of a number of ‘IDMap’ files. These files contain information about the content of the source files and are used to help produce valid links into other parts of the documentation. I’ll talk about ID mapping more in a later post.

The next stage is to build the ‘prepped’ version of the documentation, which combines all of the individual files into one large file and does some pre-processing to ensure that we get the output that we want.

The next is the remapping. This uses the IDMap information built in the first stage and ensures that any links in the documentation that go to a document we know about, like the reference manual, point to the correct online location. It is the ID mapping (and remapping) that allows us to effectively link between documents (such as the Workbench and Refman) without us having to worry about creating a complex URL link. Instead, we just include a link to the correct ID within the other document and let the ID mapping system do the rest.
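As a purely hypothetical illustration (the ID and URL below are invented for this example, not taken from the real sources), the remapping turns an in-source cross-document reference into an absolute online link:

```xml
<!-- In the Workbench DocBook source, a link into the reference manual
     is just an ID reference (hypothetical ID): -->
<xref linkend="example-section-id"/>

<!-- After remapping, the generated output points at the online manual
     (illustrative URL shape only): -->
<a href="http://dev.mysql.com/doc/refman/5.1/en/example-section.html">Example Section</a>
```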

The final stage takes our prepped, remapped DocBook XML source and converts it into the final output format using the standard DocBook XSL templates.

One of the benefits of using make is that because the build process is broken into stages, we don’t have to repeat the full process when we build another target. For example, to build a PDF version of the same document, the prepping, remapping and other stages are fundamentally the same, which is why we keep the intermediate file, workbench-prepped.xml. Building the PDF only requires us to build the FO (Formatting Objects) output, and then use fop to turn this into PDF:

$ make workbench.pdf
XML_CATALOG_FILES="../../mysqldoc-toolset//catalog.xml" xsltproc --xinclude --novalid \
--output - ../../mysqldoc-toolset/xsl.d/strip-remarks.xsl workbench-prepped.xml \
| XML_CATALOG_FILES="../../mysqldoc-toolset//catalog.xml" xsltproc --xinclude --novalid \
--stringparam l10n.gentext.default.language en \
\
--output workbench.fo-tmp ../../mysqldoc-toolset/xsl.d/mysql-fo.xsl -
Making portrait pages on USletter paper (8.5inx11in)
mv workbench.fo-tmp workbench.fo
set -e; \
if [ -f ../../mysqldoc-toolset/xsl.d/userconfig.xml ]; then \
../../mysqldoc-toolset/tools/fixup-multibyte.pl workbench.fo workbench.fo.multibyte; \
mv workbench.fo.multibyte workbench.fo; \
fop -q -c ../../mysqldoc-toolset/xsl.d/userconfig.xml workbench.fo workbench.pdf-tmp > workbench.pdf-err; \
else \
fop -q workbench.fo workbench.pdf-tmp > workbench.pdf-err; \
fi
mv workbench.pdf-tmp workbench.pdf
sed -e '/hyphenation/d' < workbench.pdf-err
[ERROR] Areas pending, text probably lost in lineWhen synchronizing the database, table comments were not updated. However, column comments worked as expected.
rm -f workbench.pdf-err

You can see in this output that the prepping and remapping processes don’t even take place – the process immediately converts the prepped file into FO and then calls fop.

That completes our whirlwind tour of the basics of building the MySQL documentation. I’ll look at some more detailed aspects of the process in future blog posts. Until then, you might want to read our metadocs on the internals in the MySQL Guide to MySQL Documentation.

MySQL on Solaris at the MySQL European Customer Conference

I’m speaking at the MySQL European Customer Conference this week (Thursday, 23rd), on the topic of the best deployment practices for using MySQL on Solaris.

I’ll be covering a number of topics, including:

  • Overview of MySQL availability on Solaris
  • General tips for MySQL on Solaris
  • MySQL on ZFS
  • DTrace and the new DTrace Probes
  • Using MySQL with containers and zones
  • Using Sun Cluster and MySQL Cluster for HA

Some of the material I’ve already covered before (see my presentation at the London Solaris User’s Group), but most of the content will be new and more focused than the top-level LOSUG presentation.

There are similar presentations being presented at the Paris and Munich conferences by Eric Bezille and Franz Haberhauer, and we’re all presenting the same basic content as we’ve been working together on the presentation.

If you are in the region and can make it to the conference, I suggest you come, and not just for my presentation; there are other topics including performance tuning, HA, MySQL Proxy, and using MySQL with memcached.

Replicating multiple masters to one slave

As standard, MySQL allows replication from one master to multiple slaves, and that is a common scale-out scenario, but there have been a few comments recently, and some longer-standing queries, about a setup that works the other way round, that is, multiple masters replicating into a single slave.

This is a common enough scenario in data logging systems, where the data is collected locally and then pushed up to a central database, or in EPOS (Electronic Point of Sale) systems, where you want the transaction logs from the tills sent up to the database at head office. There are many other situations where you want that merging of information.

Although MySQL doesn’t support a ‘multiple master, single slave’ configuration directly, you can simulate the general approach by using a combination of replication and federated tables.

Replication allows for different table types on the master (the source of the data) and the slave. There are many advantages to this, for example, using InnoDB on the master to take advantage of transactions, while using MyISAM on the slave for read performance.
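As a quick sketch (the table and column names here are invented for illustration), the same table can use different engines on the master and the slave:

```sql
-- On the master: transactional engine for safe writes.
CREATE TABLE till_log (
  till_id   INT NOT NULL,
  logged_at DATETIME NOT NULL,
  amount    DECIMAL(10,2) NOT NULL
) ENGINE=InnoDB;

-- On the slave, after the table has been created there, switch it to a
-- read-optimized engine; replication keeps feeding it rows regardless.
ALTER TABLE till_log ENGINE=MyISAM;
```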

Federation allows you to access the tables of a remote server as if they were local tables. You can set up a federated table to access a remote table from as many machines as you like, which means that you can have two, or more, MySQL instances set up to use the remote table through the federated engine. You can execute any queries you like on the remote table, but you need to take care when using multiple hosts to access it. Particularly when doing INSERTs from multiple hosts, using InnoDB, Falcon, Maria or another engine that supports multiple writers for the remote table can be a good idea, although I’ll cover some workarounds for that later.

Using federated tables gives us the ability to write to the same table from multiple hosts, but you don’t want to read and write from the same remote table all the time, especially if on your local machine (your till, or data collector) you want to be able to run your own queries.

This is where the replication fits in. You set up replication from each master to another instance of MySQL; let’s call it a ‘Fed Slave’ (the name works both ways). On the Fed Slave, you configure the table or tables that you want merged on the final ‘Merge Slave’ machine as federated tables. Data is replicated from the master to the Fed Slave, and on the Fed Slave the replicated queries are sent on to the Merge Slave via federation. You can probably see this more clearly in the figure below.

To re-iterate:

  1. INSERT on Master 1 is replicated to Fed Slave 1
  2. Fed Slave 1 executes the INSERT on a Federated table which points to Merge Slave
  3. Merge Slave executes the federated statement on its local table
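The steps above can be sketched in SQL. All the names, host, and credentials below are hypothetical, but the CONNECTION string follows the standard FEDERATED engine format:

```sql
-- On the Merge Slave: the real table that collects rows from all masters.
-- Note: no unique key, and a source_id column to identify the origin.
CREATE TABLE sales (
  source_id INT NOT NULL,
  sale_time DATETIME NOT NULL,
  amount    DECIMAL(10,2) NOT NULL
) ENGINE=InnoDB;

-- On each Fed Slave: an identically structured FEDERATED table pointing
-- at the Merge Slave, so replicated INSERTs are executed remotely.
CREATE TABLE sales (
  source_id INT NOT NULL,
  sale_time DATETIME NOT NULL,
  amount    DECIMAL(10,2) NOT NULL
) ENGINE=FEDERATED
  CONNECTION='mysql://fed_user:fed_pass@merge-slave.example.com:3306/merged/sales';
```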

Each Fed Slave is relatively lightweight – all it’s doing is executing a statement and sending the statement over the network to the Merge Slave, so you could run it on the same machine as Master 1.

There are a few problems with this design:

  1. Updating the same federated table from multiple hosts can get messy. There are a few ways you can get round this; one is to stop the query execution on the slaves and only allow them to run during a set period of time. For example, let Fed Slave 1 execute the queries in its log from 1am to 2am, Fed Slave 2 from 2am to 3am, and so on.
  2. Federation doesn’t get round the problem of duplicate IDs – if you try to run a statement on a federated table that inserts a duplicate ID, it will fail just as it would locally. You can get round this by making sure that the tables that hold the merged data on your Merge Slave don’t have unique ID constraints, and that your masters and all the table definitions contain a field to identify the source of the data in each case.
  3. Load can be an issue. One of the reasons I suggested InnoDB/Falcon/Maria is to help get round the multiple-insert locking that is normally applied, but the very nature of the system means that locks and delays might still occur. You can’t eliminate them, but you can ease them.

I’ve tried and used this method in a number of situations, though not actually for the reasons given above, but for performance logging from multiple hosts onto one. I’ll be honest and say that I’ve never seen a problem, but, at the same time, the type of data that I am collecting means that I would have been unlikely to notice a missing data point or two.

MySQL University: Checking Threading and Locking With Helgrind

This Thursday, Stewart Smith will give a MySQL University session:

Checking Threading and Locking With Helgrind

Note that this particular session starts at 9:00 BST / 10:00 CET / 18:00 Brisbane/Melbourne.

Stewart is always enjoyable to listen to, both because he knows his stuff and because he is a really fun guy (heads up for the MySQL Conference 09: the Monty Taylor/Stewart Smith double act at this year’s conference was one of the most interesting and informative sessions I went to).

Please register for this session by filling in your name on the session Wiki page. Registering is not required but appreciated. That Wiki page also contains a section to post questions. Please use it!

MySQL University sessions normally start at 13:00 UTC (summer) or 14:00 UTC (winter); see MySQL University for more time zone information.

Those planning to attend a MySQL University session for the very first time should probably read the Instructions for Attendees.

See Upcoming Sessions for the complete list of upcoming University sessions.

How to analyze memory leaks on Windows

We use valgrind to find memory leaks in MySQL on Linux. The tool is a convenient, and often enlightening, way of finding out where the real and potential problems are located.

On Windows, you don’t have valgrind, but Microsoft do provide a free native debugging tool, called the user-mode dump heap (UMDH) tool, which performs a similar function to valgrind in determining memory leaks.

Vladislav Vaintroub, who works on the Falcon team and is one of our resident Windows experts, provides the following how-to for using UMDH:

  1. Download and install the Debugging Tools for Windows from here:
    MS Debugging Tools
    Install the 64-bit version if you’re on 64-bit Windows, and the 32-bit
    version otherwise.

  2. Change the PATH environment variable to include the bin directory of the
    Debugging Tools. On my system, I added
    C:\Program Files\Debugging Tools for Windows 64-bit to the PATH.

  3. Instruct the OS to collect allocation stacks for mysqld with

    gflags -i mysqld.exe +ust

    On Vista and later, this should be done in an “elevated” command prompt,
    as it requires admin privileges.

    Now collect the leak information. The mode of operation is: take a heap
    snapshot once, then after some load take it again; compare the snapshots
    and output the leak info.

  4. Preparation: set up the debug symbol path.
    In the command prompt window, do

    set _NT_SYMBOL_PATH=srv*C:\websymbols*http://msdl.microsoft.com/download/symbols;G:\bzr\mysql-6.0\sql\Debug

    Adjust the second path component for your needs; it should include the
    directory where mysqld.exe is.

  5. Start mysqld and run it for some minutes
  6. Take the first heap snapshot

    umdh -p:6768 -f:dump1

    where -p: specifies the process ID (the PID of my mysqld was 6768).

  7. Let mysqld run for another few minutes
  8. Take the second heap snapshot

    umdh -p:6768 -f:dump2

  9. Compare snapshots

    umdh -v dump1 dump2 > dump.compare.txt

  10. Examine the resulting output file. It is human-readable, but all numbers
    are in hex, to scare everyone except geeks.
  11. Instruct the OS not to collect user-mode stacks for mysqld allocations
    anymore:

    gflags -i mysqld.exe -ust

It sounds like a lot of steps and a lot of work, but in reality it takes 15 minutes the first time you do it and 5 minutes the next time.

Additional information is given in the Microsoft KB article about UMDH, KB 268343.

Book: Intellectual Property and Open Source – the solution to IANAL

I’m reading Intellectual Property and Open Source by Van Lindberg at the moment, and despite being about a relatively dry topic, I must admit that it’s a fascinating read.

Van Lindberg introduces the book by talking about the comments that end up on Slashdot.org, almost certainly prefixed by the expression IANAL (I Am Not A Lawyer) where people defend, discuss, and rip people up about the legalities of open source and the various licenses. Van Lindberg also talks about how he spends much of his time translating the contents of various legal documents into engineer speak and back again.

Despite being a proponent and long time user of free software and open source for the best part of my working life, I’ll admit to being completely ignorant of many of the issues. This isn’t through lack of interest, but I’d rather leave those discussions and decisions to people who know, and it’s clear that Van Lindberg not only knows the subject, but he also knows how to make it interesting to those of us who actually have to work within the confines of rules and regulations.

I’m still reading and learning a lot of the ins and outs of copyright, company agreements, and individual licenses and details. There’s a lot of material and detail included here.

I’ll have a full review when I’ve finished. Until then, if you have even a passing interest in the various licensing, legal and IP issues with open source, check out the book for a proper read.

MySQL University – quick survey

MySQL University has been running for the last 18 months, and we’ve covered a wide range of topics, from the internals of MySQL right up to Amazon’s EC2, using MySQL in the Solaris/OpenSolaris Webstack and a description of the forthcoming MySQL Online Backup.

Personally, I think they’re great. Obviously many times I am the scribe and am there for the sessions, but I listen to lots of the sessions anyway, and I’ve yet to be disappointed by the content. What’s really great is that in every case the person you are listening to is probably the person that either developed, or helped drive the development of, the particular function or, in the case of some of the external tools (EC2, for example), is an expert in it. The experience is not quite as thrilling as attending the MySQL User Conference, but the content is just the same.

The problem is that despite all the work we do to get the presenters, interesting topics, and promotion of the upcoming sessions, we don’t always get as many attendees as we want or expect.

So, I’m wondering why this should be the case. We know that the current presentation system is not ideal (and we’re working on that), but I’m interested to hear people’s opinions on MySQL University. If you want to help shape the future of MySQL University, then comment here, and either answer the questions below, or make up your own.

  • Have you attended any MySQL University sessions? How many?
  • How would you rate the sessions generally? A simple good or bad will do
  • If you haven’t attended any sessions, or don’t regularly attend them, why not?
  • Have you ever looked at or listened to the past sessions that we provide on MySQL Forge?

Please, I’m interested to hear.