Using the MySQL Doc source tree

I’ve mentioned a number of times that the documentation repositories that we use to build the docs are freely available, and so they are, but how do you go about using them?

Judging by the queries we get, more and more people are interested in being able to work with the MySQL docs, and internally we sometimes get specialized requests too.

There are some limitations: although you can download the docs and generate your own versions in various formats, you are not allowed to distribute or supply that information; it can be employed only for personal use. The reasons and disclaimer for that are available on the main page of each of the docs, such as the one for the 5.1 Manual.

Those issues aside, if you want to generate your own docs from the Subversion source tree, you'll need the following (a sample install command follows the list):

  • Subversion to download the sources
  • XML processors to convert the DocBook XML into various target formats; we include the DocBook XML/XSLT files you'll need.
  • Perl for some of the checking scripts and the ID mapping parts of the build process
  • Apache's FOP, if you want to generate PDFs; otherwise you can skip it.
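
If you're starting from scratch, a minimal sketch for installing these prerequisites on a Debian-style system might look like the following; the package names vary by distribution, and fop is only needed if you want PDFs:

$ sudo apt-get install subversion xsltproc perl fop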

To get started, you need to download the DocBook XML source from the public Subversion repository. We recently split the single Subversion tree containing the English-language version into two repositories: one containing the pure content, the other the tools required to build the docs. The reason for that is consistency across all of our repositories, internally and externally, for the reference manual in all its different versions.

You therefore need both repositories, checked out into the same directory:

$ svn checkout http://svn.mysql.com/svnpublic/mysqldoc
$ svn checkout http://svn.mysql.com/svnpublic/mysqldoc-toolset

Assuming you have already downloaded the XML toolkit, make sure you have the necessary Perl modules installed. You'll need the Expat library and the following Perl modules:

  • Digest::MD5
  • XML::Parser::PerlSAX
  • IO::File
  • IO::String

If you have CPAN installed, you can install them automatically using perl -MCPAN -e 'install modulename', or use your package management system to install the modules for you. You'll get an error message during the build if something is missing.
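
As a quick sanity check before building, you can try loading all four modules in one go; a non-zero exit status (with an error naming the missing module) means you still have something to install:

$ perl -MDigest::MD5 -MXML::Parser::PerlSAX -MIO::File -MIO::String -e 1

Any missing module can then be installed by name, for example:

$ perl -MCPAN -e 'install XML::Parser::PerlSAX'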

OK, with everything in place you are ready to try building the documentation. You can change into most directories and convert the XML files there into a final document. For example, to build the Workbench documentation, change into the Workbench directory. We use make to build the various files and dependencies.

To build the full Workbench documentation, specify the main file, workbench, as the target, and the file format you want to produce as the extension. For example, to build a single HTML file, the extension is html. I've included the full output here so that you can see exactly what you will get:

make workbench.html
set -e; \
../../mysqldoc-toolset/tools/dynxml-parser.pl \
--infile=news-workbench-core.xml --outfile=dynxml-local-news-workbench.xml-tmp-$$ --srcdir=../dynamic-docs --srclangdir=../dynamic-docs; \
mv dynxml-local-news-workbench.xml-tmp-$$ dynxml-local-news-workbench.xml
make -C ../refman-5.1 metadata/introduction.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
../../mysqldoc-toolset/tools/idmap.pl refman/5.1/en introduction.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
make -C ../refman-5.1 metadata/partitioning.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
../../mysqldoc-toolset/tools/idmap.pl refman/5.1/en partitioning.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
make -C ../refman-5.1 metadata/se-merge.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
../../mysqldoc-toolset/tools/idmap.pl refman/5.1/en se-merge.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
make -C ../refman-5.1 metadata/se-myisam-core.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
../../mysqldoc-toolset/tools/idmap.pl refman/5.1/en se-myisam-core.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
make -C ../refman-5.1 metadata/sql-syntax-data-definition.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
../../mysqldoc-toolset/tools/idmap.pl refman/5.1/en sql-syntax-data-definition.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
make -C ../workbench metadata/documenting-database.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en documenting-database.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/foreign-key-relationships.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en foreign-key-relationships.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/forward-engineering.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en forward-engineering.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/grt-shell.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en grt-shell.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/images.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en images.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/installing.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en installing.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/layers.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en layers.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/notes.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en notes.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/plugins.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en plugins.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/printing.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en printing.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/reference.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en reference.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/reverse-engineering.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en reverse-engineering.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/server-connection-wizard.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en server-connection-wizard.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/stored-procedures.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en stored-procedures.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/tables.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en tables.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/text-objects.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en text-objects.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/tutorial.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en tutorial.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/validation-plugins.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en validation-plugins.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/views.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en views.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
XML_CATALOG_FILES="../../mysqldoc-toolset//catalog.xml" xsltproc --xinclude --novalid \
--stringparam repository.revision "`../../mysqldoc-toolset/tools/get-svn-revision`" \
--param map.remark.to.para 0 \
--stringparam qandaset.style "" \
../../mysqldoc-toolset/xsl.d/dbk-prep.xsl workbench.xml > workbench-prepped.xml.tmp
../../mysqldoc-toolset/tools/bug-prep.pl workbench-prepped.xml.tmp
../../mysqldoc-toolset/tools/idremap.pl --srcpath="../workbench ../gui-common ../refman-5.1 ../refman-common ../refman-5.0" --prefix="workbench-" workbench-prepped.xml.tmp > workbench-prepped.xml.tmp2
mv workbench-prepped.xml.tmp2 workbench-prepped.xml
rm -f workbench-prepped.xml.tmp
XML_CATALOG_FILES="../../mysqldoc-toolset//catalog.xml" xsltproc --xinclude --novalid \
--stringparam l10n.gentext.default.language en \
--output workbench.html-tmp \
../../mysqldoc-toolset/xsl.d/mysql-html.xsl \
workbench-prepped.xml
../../mysqldoc-toolset/tools/add-index-navlinks.pl workbench.html-tmp
mv workbench.html-tmp workbench.html

There's a lot in the output above, and I'll describe the content as best I can without going into too much detail in this piece.

First off, make triggers some dependencies: the creation of a number of 'IDMap' files. These files contain information about the content of each source file and are used to help produce valid links into other parts of the documentation. I'll talk about ID mapping more in a later post.

The next stage is to build the ‘prepped’ version of the documentation, which combines all of the individual files into one large file and does some pre-processing to ensure that we get the output that we want.

Next is the remapping. This uses the IDMap information built in the first stage and ensures that any links in the documentation that go to a document we know about, like the reference manual, point to the correct online location. It is the ID mapping (and remapping) that lets us link between documents (such as Workbench and the Refman) without having to construct complex URLs; instead, we just include a link to the correct ID within the other document and let the ID mapping system do the rest.
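
For illustration (a hypothetical, simplified example; the ID here is invented): in the Workbench source an author can write a cross-document link such as

<xref linkend="partitioning"/>

and the remapping stage resolves that ID against the IDMap data, so the generated HTML points at the corresponding page of the online reference manual rather than at a non-existent local anchor.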

The final stage takes our prepped, remapped DocBook XML source and converts it into the final HTML using the standard DocBook XSL templates.

One of the benefits of using make is that, because the build is broken into stages, building another target doesn't repeat the full process. For example, to build a PDF version of the same document, the prepping, remapping, and other stages are fundamentally the same, which is why we keep the intermediate file, workbench-prepped.xml. Building the PDF only requires building the FO (Formatting Objects) output and then using fop to turn it into a PDF:

$ make workbench.pdf
XML_CATALOG_FILES="../../mysqldoc-toolset//catalog.xml" xsltproc --xinclude --novalid \
--output - ../../mysqldoc-toolset/xsl.d/strip-remarks.xsl workbench-prepped.xml \
| XML_CATALOG_FILES="../../mysqldoc-toolset//catalog.xml" xsltproc --xinclude --novalid \
--stringparam l10n.gentext.default.language en \
\
--output workbench.fo-tmp ../../mysqldoc-toolset/xsl.d/mysql-fo.xsl -
Making portrait pages on USletter paper (8.5inx11in)
mv workbench.fo-tmp workbench.fo
set -e; \
if [ -f ../../mysqldoc-toolset/xsl.d/userconfig.xml ]; then \
../../mysqldoc-toolset/tools/fixup-multibyte.pl workbench.fo workbench.fo.multibyte; \
mv workbench.fo.multibyte workbench.fo; \
fop -q -c ../../mysqldoc-toolset/xsl.d/userconfig.xml workbench.fo workbench.pdf-tmp > workbench.pdf-err; \
else \
fop -q workbench.fo workbench.pdf-tmp > workbench.pdf-err; \
fi
mv workbench.pdf-tmp workbench.pdf
sed -e '/hyphenation/d' < workbench.pdf-err
[ERROR] Areas pending, text probably lost in lineWhen synchronizing the database, table comments were not updated. However, column comments worked as expected.
rm -f workbench.pdf-err

You can see in this output that the prepping and remapping stages don't run again; the build immediately converts the existing prepped file into FO and then calls fop.

That completes our whirlwind tour of the basics of building the MySQL documentation. I'll look at some more detailed aspects of the process in future blog posts. Until then, you might want to read our metadocs on the internals in the MySQL Guide to MySQL Documentation.

MySQL on Solaris at the MySQL European Customer Conference

I’m speaking at the MySQL European Customer Conference this week (Thursday, 23rd), on the topic of the best deployment practices for using MySQL on Solaris.

I’ll be covering a number of topics, including:

  • Overview of MySQL availability on Solaris
  • General tips for MySQL on Solaris
  • MySQL on ZFS
  • DTrace and the new DTrace Probes
  • Using MySQL with containers and zones
  • Using Sun Cluster and MySQL Cluster for HA

Some of the material I've covered before (see my presentation at the London Solaris User's Group), but most of the content will be new and more focused than the top-level LOSUG presentation.

Similar presentations are being given at the Paris and Munich conferences by Eric Bezille and Franz Haberhauer; we're all presenting the same basic content, as we've been working on the presentation together.

If you are in the region and can make it to the conference, I suggest you come, and not just for my presentation; other topics include performance tuning, HA, MySQL Proxy, and using MySQL with memcached.

Resources for Running Solaris OS on a Laptop

As Solaris gains popularity, I'm seeing more and more people running it on a laptop as their primary operating system. I've even got friends who have migrated over completely to Solaris from Linux. I've been using it for years and managed to tolerate some of the problems we had in the early days, but today it works brilliantly on many machines.

I came across this article on BigAdmin; it's old, but a lot of the information is still perfectly valid.

Read Resources for Running Solaris OS on a Laptop

Replicating multiple masters to one slave

As standard, MySQL allows replication from one master to multiple slaves, which is a common scale-out scenario, but there have been a few comments recently, and some longer-standing queries, about a setup that works the other way round: multiple masters replicating into a single slave.

This is a common enough scenario in data-logging systems, where data is collected locally and then pushed up to a central database, or in EPOS (Electronic Point of Sale) systems, where you want the transaction logs from the tills collected into the database at head office. There are many other situations where you want that merging of information.

Although MySQL doesn't support a 'multiple master, single slave' configuration directly, you can simulate the general approach by using a combination of replication and federated tables.

Replication allows for different table types on the master (the source of the data) and the slave. There are many advantages to this: for example, using InnoDB on the master to take advantage of transactions, while using MyISAM on the slave for read performance.

Federation allows you to access the tables of a remote server as if they were local tables. You can set up a federated table to access a remote table from as many machines as you like; that means you can have two or more MySQL instances using the same remote table through the federated engine. You can execute any queries you like on the remote table, but you need to take care when using multiple hosts to access it. Particularly when doing INSERTs from multiple hosts, using InnoDB, Falcon, Maria, or another engine that supports concurrent writers can be a good idea, although I'll cover some workarounds for that later.

Using federated tables gives us the ability to write to the same table from multiple hosts, but you don't want to read and write the same remote table all the time, especially if on your local machine (your till, or data collector) you want to be able to run your own queries.

This is where replication fits in: you set up replication from each master to another instance of MySQL; let's call it a 'Fed Slave'. On each Fed Slave, you configure the table or tables that you want merged on the final 'Merge Slave' machine to be federated tables. Data is replicated from the master to its Fed Slave, and on the Fed Slave the replicated statements are applied to the federated tables, which pass them on to the Merge Slave.

To reiterate (a concrete table setup follows the list):

  1. INSERT on Master 1 is replicated to Fed Slave 1
  2. Fed Slave 1 executes the INSERT on a Federated table which points to Merge Slave
  3. Merge Slave executes the federated statement on its local table
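
To make that concrete, here is a minimal sketch of the table setup; the server name, credentials, and columns are all hypothetical, and the FEDERATED engine must be enabled on the Fed Slave:

-- On the Merge Slave: a plain local table that will hold the merged rows.
CREATE TABLE sales_log (
    till_id INT NOT NULL,          -- identifies the source master
    logged  DATETIME NOT NULL,
    amount  DECIMAL(10,2) NOT NULL
) ENGINE=InnoDB;

-- On each Fed Slave: the same table definition, but using the FEDERATED
-- engine so that every statement applied by replication is passed on to
-- the Merge Slave.
CREATE TABLE sales_log (
    till_id INT NOT NULL,
    logged  DATETIME NOT NULL,
    amount  DECIMAL(10,2) NOT NULL
) ENGINE=FEDERATED
CONNECTION='mysql://fed_user:fed_pass@mergeslave.example.com:3306/merge/sales_log';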

Each Fed Slave is relatively lightweight; all it is doing is executing the replicated statements and sending them over the network to the Merge Slave, so you could run it on the same machine as Master 1.

There are a few problems with this design:

  1. Updating the same federated table from multiple hosts can get messy. There are a few ways you can get round this; one is to stop the query execution on the slaves and only allow them to run during a set period of time (see the cron sketch after this list). For example, let Fed Slave 1 execute the queries in its log from 1am to 2am, Fed Slave 2 from 2am to 3am, and so on.
  2. Federation doesn't get round the problem of duplicate IDs; if you try to run a statement on a federated table that inserts a duplicate ID, it will fail just as it would locally. You can get round this by making sure that the tables holding the merged data on your Merge Slave don't have unique ID constraints, and that the table definitions on all your Masters contain a field identifying the source of each row.
  3. Load can be an issue. One of the reasons I suggested InnoDB/Falcon/Maria is to help get round the locking normally applied during concurrent inserts, but the very nature of the system means that locks and delays may still occur. You can't eliminate the problem, but you can ease it.
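
For the first workaround, here is a sketch of how you might gate each Fed Slave with cron; the schedule, account, and use of the SQL thread are illustrative rather than a tested recipe:

# crontab on Fed Slave 1: apply replicated statements only between 1am and 2am
0 1 * * * mysql -u root -e "START SLAVE SQL_THREAD;"
0 2 * * * mysql -u root -e "STOP SLAVE SQL_THREAD;"

Stopping only the SQL thread means the I/O thread keeps pulling updates from the master into the relay log; the queued statements are then applied to the federated table during the slave's allotted window.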

I've tried and used this method in a number of situations, though not for the reasons given above, but for performance logging from multiple hosts onto one. I'll be honest and say that I've never seen a problem; at the same time, the type of data I collect means I would have been unlikely to notice a missing data point or two.

Comparing clusters, grids and clouds

A nice white paper on the differences between large-scale computing technologies is available over at GridBus, The Grid Computing and Distributed Systems (GRIDS) Laboratory at the University of Melbourne.

The white paper provides a lot of information, including detailed comparisons of the different massively scalable computing systems, and a view of how the current IT climate is ultimately proceeding towards a cloud computing infrastructure.


MySQL University: Checking Threading and Locking With Helgrind

This Thursday, Stewart Smith will give a MySQL University session:

Checking Threading and Locking With Helgrind

Note that this particular session starts at 9:00 BST / 10:00 CET / 18:00 Brisbane/Melbourne.

Stewart is always enjoyable to listen to, both because he knows his stuff and because he is a really fun guy (heads up for the MySQL Conference 09: the Monty Taylor/Stewart Smith double act at this year's conference was one of the most interesting and informative sessions I went to).

Please register for this session by filling in your name on the session Wiki page. Registering is not required, but appreciated. That Wiki page also contains a section to post questions. Please use it!

MySQL University sessions normally start at 13:00 UTC (summer) or 14:00 UTC (winter); see MySQL University for more time zone information.

Those planning to attend a MySQL University session for the very first time should probably read the Instructions for Attendees.

See Upcoming Sessions for the complete list of upcoming University sessions.

How to analyze memory leaks on Windows

We use valgrind to find memory leaks in MySQL on Linux. The tool is a convenient, and often enlightening, way of finding out where the real and potential problems are located.

On Windows you don't have valgrind, but Microsoft does provide a free native debugging tool, called the user-mode dump heap (UMDH) tool, which performs a similar function to valgrind in tracking down memory leaks.

Vladislav Vaintroub, who works on the Falcon team and is one of our resident Windows experts, provides the following how-to for using UMDH:

  1. Download and install the Debugging Tools for Windows from Microsoft (MS Debugging Tools). Install the 64-bit version if you're on 64-bit Windows, and the 32-bit version otherwise.

  2. Change the PATH environment variable to include the bin directory of the Debugging Tools. On my system, I added C:\Program Files\Debugging Tools for Windows 64-bit to the PATH.

  3. Instruct the OS to collect allocation stacks for mysqld:

    gflags -i mysqld.exe +ust

    On Vista and later, this should be done in an 'elevated' command prompt, as it requires admin privileges.

    Now collect the leak information. The mode of operation is: take a heap snapshot once, take another after some load, then compare the snapshots to output the leak info.

  4. Preparation: set up the debug symbol path. In the command prompt window, do

    set _NT_SYMBOL_PATH=srv*C:\websymbols*http://msdl.microsoft.com/download/symbols;G:\bzr\mysql-6.0\sql\Debug

    Adjust the second path component for your needs; it should include the directory where mysqld.exe is.

  5. Start mysqld and run it for some minutes.
  6. Take the first heap snapshot:

    umdh -p:6768 -f:dump1

    The -p option specifies the process ID; the PID of my mysqld was 6768.

  7. Let mysqld run for some more minutes.
  8. Take the second heap snapshot:

    umdh -p:6768 -f:dump2

  9. Compare the snapshots:

    umdh -v dump1 dump2 > dump.compare.txt

  10. Examine the resulting output file. It is human readable, but all numbers are in hex, to scare everyone except geeks.
  11. Instruct the OS not to collect mysqld user-mode stacks for allocations anymore:

    gflags -i mysqld.exe -ust

That's 11 steps, and it sounds like a lot of work, but in reality it takes 15 minutes the first time you do it and 5 minutes the next time.
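
Putting the commands from the steps together, a complete session looks something like this; the PID and symbol path are the ones from the example above, so substitute your own:

gflags -i mysqld.exe +ust
set _NT_SYMBOL_PATH=srv*C:\websymbols*http://msdl.microsoft.com/download/symbols;G:\bzr\mysql-6.0\sql\Debug
rem start (or restart) mysqld so the +ust setting takes effect, then apply some load
umdh -p:6768 -f:dump1
rem ... run some more load against mysqld ...
umdh -p:6768 -f:dump2
umdh -v dump1 dump2 > dump.compare.txt
gflags -i mysqld.exe -ust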

Additional information is given in the Microsoft KB article about UMDH, KB 268343.

Book: Intellectual Property and Open Source – the solution to IANAL

I'm reading Intellectual Property and Open Source by Van Lindberg at the moment, and despite the relatively dry topic, I must admit that it's a fascinating read.

Van Lindberg introduces the book by talking about the comments that end up on Slashdot.org, almost certainly prefixed by the expression IANAL (I Am Not A Lawyer), where people defend, discuss, and rip each other apart over the legalities of open source and the various licenses. He also talks about how he spends much of his time translating the contents of various legal documents into engineer-speak and back again.

Despite being a proponent and long-time user of free software and open source for the best part of my working life, I'll admit to being completely ignorant of many of the issues. This isn't through lack of interest; I'd rather leave those discussions and decisions to people who know, and it's clear that Van Lindberg not only knows the subject but also knows how to make it interesting to those of us who have to work within the confines of the rules and regulations.

I'm still reading, and learning a lot about the ins and outs of copyright, company agreements, and individual licenses. There's a lot of material and detail included here.

I’ll have a full review when I’ve finished. Until then, if you have even a passing interest in the various licensing, legal and IP issues with open source, check out the book for a proper read.
