Handling the structure, and deciding which fields, data types, or data points relate to which fields or target elements within the destination database, is one thing. Normally you can make an intelligent decision about the information that is being transferred and how that structural information should be handled.
The actual data can be a completely different problem. There are so many potential problems here with simply parsing and understanding the data that we need to look at some of the more common issues and how they should be addressed.
In most cases, handling the content and structure of the individual types is about understanding the two dimensions of the problem:
- Supported types – that is, whether the target database understands, or even identifies, the underlying type. For example, within most RDBMS environments, data types are very strictly enforced, but in NoSQL or data warehouse environments the corresponding engines may either not care, or only care in specific situations.
- Parseable information – the information may be readable or storable, but not to the same precision or definition. For example, with time values, some systems store hours, minutes and seconds, some store only hours and minutes, and a whole variety store fractions of a second to different lengths, from one to as many as 15 digits of precision. At the other end of the same spectrum, some dates are represented by seconds, others by a string in various formats.
Let’s look at the finer details of understanding these different types and how they can be stored, exchanged and ultimately parsed.
Basic Types
There are four basic types that make up all data and that, on the whole, get represented within a database system. Identifying these four types will help us to lay the groundwork for how we handle, identify and treat the rest of the data:
- Numeric – basically any kind of number or numeric value, from integers through to floating point and even ‘Big’ numbers.
- Strings – Strings seem like a very simple list of characters, but they aren’t as straightforward as you might think.
- Dates – Dates are a special type all of their own. Although they look like the worst parts of numbers and strings combined, ensuring that they are identifiable as a date in the target database relies on understanding the different formats, separators and structures.
- Binary – any type of data that does not fall into the previous three groups is binary data. Here the issue is one of identity and the ability to interpret the content once it reaches the destination database.
Remember that in all of these situations, we are not only talking about a one-time formatting of the information. We could be dealing with frequent and regular exchanges of this information between the databases, and may need to perform these conversions repeatedly if the data is integrated across multiple database environments.
When combining, for example, MongoDB data with Oracle information for the purposes of reporting, you need to do more than change the format once. The data needs to be in a format representable by both databases throughout its life, while simultaneously ensuring that the information is stored within each database in the way that gets you the performance you need.
Strict and Relaxed Translation
Whenever you are moving data between different databases you need to determine whether the type and structure of information is important enough, or critical enough, that it must be represented in its native format.
That may sound like a completely nonsensical approach to data – surely the quality of the data is absolutely critical and should be represented as such everywhere? In theory it should, but different database storage environments treat and handle the data in different ways according to the basic type in question.
To understand why this is important, we need to look back, both historically and technically, at why information was stored in the strict formats we described in the last section.
In older, and particularly RDBMS-based, database solutions, data was stored in fixed types and with fixed lengths so that the record could be manipulated as efficiently as possible. We saw some examples of this in Chapter 1. For numerical values, it is much more efficient to store a 32-bit integer as just 4 bytes of data than it is to store the string 2147483647 (which would take 10 bytes).
Similarly, with string types, the primary consideration has always been to minimize the amount of storage reserved for the string, because handling bigger strings, or bigger potential blocks for strings, was more expensive in computing time, memory space, and disk space. Back when databases ran on machines with 512KB of RAM, devoting massive blocks of memory to space that was allocated but never used to store data just wasn't an option. This is why eight-character filenames and two- or three-letter codes became common across a variety of databases and storage methods.
In modern systems, of course, we actually have almost the opposite problem. Data sizes are now regularly so large that we need to be prepared to handle massive blocks of information, whereas before that might have been impossible. This is fine when we are moving data from a traditional RDBMS to, say, Hadoop, because we move from a strict environment to a very relaxed one. But it is not true when moving in the opposite direction.
To make matters worse, in many Big Data environments, including most of the Hadoop database layers like Hive, the datatype is only significant at the time the data is queried. Within Hive you can load a CSV file that contains a variety of different types, but unless you explicitly tell Hive that the fifth column is a string or a number, Hive doesn't really care, and lets you export and query the data as normal.
For example, here's a table with a mixture of different column types, using strict datatypes to define the actual content of each column:
hive> select * from stricttyping;
OK
1 Hello World 2014-10-04 12:00:00 439857.34
1 Hello World 2014-10-04 12:00:00 157.3457
1 Hello World 2014-10-04 12:00:00 4.8945796E7
Now we can view the same data, this time loaded into a table where each column has been defined as a string:
hive> select * from relaxedtyping;
OK
1 Hello World 2014-10-04 12:00:00 439857.345
1 Hello World 2014-10-04 12:00:00 157.3457
1 Hello World 2014-10-04 12:00:00 48945797.3459845798475
The primary differences are in the handling of floating-point values – the strict table loses precision in the first row (the value was a FLOAT), and in the last row the value is represented as a DOUBLE with a loss of the precision digits. In fact, within Hive, the data is not parsed into the corresponding type until it is used or displayed. If you examine the raw CSV file:
$ hadoop fs -cat stricttyping/sample_copy_1.csv
1,Hello World,2014-10-04 12:00:00,439857.345
1,Hello World,2014-10-04 12:00:00,157.3457
1,Hello World,2014-10-04 12:00:00,48945797.3459845798475
You can see that the full, unparsed values are stored untouched in the underlying file. In fact, many people deliberately don't load the data into fixed-type columns at all; they define the column types as strings, import the data, and ultimately ignore the real type until they have to parse or understand it for some reason.
Similarly, in NoSQL environments, the data types may exist only for explicit representation requirements, and have no effect on the ability to store, display, or query the information. Even in a traditional RDBMS, there is no requirement to store certain values in certain column types, but certain operations may be limited. For example, most RDBMSs will not perform a SUM() operation on a string column.
The bottom line is that you will need to think about whether to explicitly make use of these columns because you need them as specific types in the target database, or whether to ignore them completely.
- Strict transformations – Use strict datatypes when the information you want to store must be correctly interpreted within the target database and doing so provides some form of performance advantage, unless it reduces the validity or precision of the information.
- Relaxed transformations – Use relaxed transformations whenever the processing or target system does not support the required precision, or in those cases where the processing of the information is irrelevant. Most transfers to NoSQL and Big Data environments fit this model automatically.
With these options in mind, let's look at some of the more specific types available.
Handling Numeric Values
Simple, plain, integers are supported by nearly all databases as explicit and identifiable types. Even document databases such as MongoDB and Couchbase understand the significance of a numeric value over a string representation.
However, if you are transferring big integers, be conscious of the limitations of the target database. Some environments explicitly support very large integers. Hive, for example, supports the BIGDECIMAL datatype, which holds numbers up to 10 to the power of 308. Others do not.
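As a sketch of the kind of check involved, the following Python fragment decides whether an integer fits the target's native range before falling back to a relaxed, string-based transfer. The ranges and the fallback policy are illustrative assumptions, not the rules of any particular database:

INT32_MIN, INT32_MAX = -2**31, 2**31 - 1
INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

def integer_transfer_value(value, target_supports_64bit=True):
    """Return the value unchanged if the target can hold it natively,
    otherwise fall back to a relaxed (string) representation."""
    low, high = (INT64_MIN, INT64_MAX) if target_supports_64bit else (INT32_MIN, INT32_MAX)
    if low <= value <= high:
        return value      # strict transfer: keep the native integer type
    return str(value)     # relaxed transfer: preserve every digit as text

print(integer_transfer_value(2147483647, target_supports_64bit=False))   # fits: 2147483647
print(integer_transfer_value(2147483648, target_supports_64bit=False))   # too big: '2147483648'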
Floating Point Issues
The biggest problem with floating-point values is one of precision and storage capability. There are large variations between the supported types, how much is stored, and how precise it can be. Furthermore, some databases specifically differentiate between decimal and floating-point values and have different rules for how these should be represented and stored, and the two are not necessarily compatible.
For floating-point values, the main issues are:
- Representation – float values are generally displayed as a decimal value, for example:
3247234.32434
- There are no specific rules for this, but many values are based on the data types supported by the operating system and hardware. On modern 32-bit (and 64-bit) systems, single-precision floating-point values tend to have about 7 significant decimal digits of precision. This is due to the nature of the structure used to store and define them internally. A double has roughly twice the precision, up to 15 or even 16 significant digits.
- Parsing – parsing these values properly is critical if you are storing the data; unfortunately rounding errors, introduced both when the data is output and when it is parsed back, are notoriously difficult to avoid, and precision is not always religiously honoured.
For this reason, some databases explicitly support a DECIMAL type. Unlike a float, the DECIMAL type works more like two integers, one either side of the decimal point.
Processing these values reliably and storing them in a target database may lead to problems if the target system doesn't support the datatype size, precision, or structure properly. Moving the data may lose precision or content. A simple movement of the data in an export/import environment might parse and store it correctly, or it may lose or truncate the precision entirely.
If you are replicating and regularly exchanging data from one database to the other and back again, these precision errors can build up until a number drifts from its original value by a statistically significant amount. If the double type within the database environment does not support the complexity or precision of the values involved, consider using one of the big integer types and a suitable multiplier.
Finally, if the target database does not support the precision and detail that you need, consider moving the data using relaxed methods, for example by importing the data into a string, rather than a numerical type, so that it can be reliably stored.
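As a quick illustration of the difference, the following Python sketch handles the last column of the earlier CSV example strictly (as a binary float) and in a relaxed way (as an exact decimal built from the original string):

from decimal import Decimal

raw_field = "48945797.3459845798475"   # the raw CSV value from the earlier example

strict = float(raw_field)      # binary double: the trailing digits are silently lost
relaxed = Decimal(raw_field)   # exact decimal: every digit of the original survives

print(strict)    # roughly 48945797.34598458 -- reduced to about 16 significant digits
print(relaxed)   # 48945797.3459845798475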
Base-N Numbers
If you are exchanging numbers in a base other than 10, for example octal, hexadecimal, or others, ensure that the target database supports the required number format. If an explicit number format is not supported, either translate the number to decimal and handle the output and formatting of the data as the corresponding type within the target database and application, or use the relaxed method and keep it as a string.
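As a sketch, the following Python fragment normalizes prefixed base-N literals to plain decimal integers before transfer; the prefixes shown are simply the ones Python recognizes, and whether you then keep the integer or the original string depends on what the target supports:

def normalise_base_n(literal):
    """Convert a prefixed literal such as '0xff', '0o17' or '0b101' to an int."""
    return int(literal, 0)   # base 0 tells Python to infer the base from the prefix

print(normalise_base_n("0xff"))    # 255
print(normalise_base_n("0o17"))    # 15
print(normalise_base_n("0b101"))   # 5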
Strings and Character Encoding
More problems are created by strings than you might think, and the most significant is usually the character set used to store, transfer, and import the data. Character sets used to refer to the difference between the byte-level encoding for things like EBCDIC and ASCII. Today, they span a much wider array of issues as the number of character sets and the use of a wider range of languages, characters, and ideographs increases.
The best way to encode strings when moving data between databases is to use either UTF-8 (which encodes Unicode characters as sequences of 8-bit bytes) or one of the wider encodings if your data requires it. For example, if you are specifically storing foreign-language, katakana, or Chinese characters, using UTF-16 or UTF-32 may be more reliable, if not necessarily more efficient. UTF-8 can represent a very wide range of different Unicode characters and is rarely a hindrance.
Also be aware that some databases identify character encoding capabilities and data types differently. For example, the VARCHAR2 type within Oracle can be used to store strings with an optional size declaration in bytes or characters, while the NVARCHAR2 type is the Unicode equivalent. The definition of the column size can also be different. In Amazon RedShift, for example, the size of a VARCHAR column is defined in bytes, but in MySQL it's defined in characters, so a VARCHAR(20) in MySQL has to be a VARCHAR(80) in RedShift to hold the same number of multibyte characters. Sneaky.
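A small Python sketch makes the difference concrete: the character count is what a character-sized VARCHAR needs, while the UTF-8 byte count is what a byte-sized VARCHAR needs. The sample string is an arbitrary example:

value = "naïve 文字列"

char_length = len(value)                   # what a character-sized VARCHAR must hold
byte_length = len(value.encode("utf-8"))   # what a byte-sized VARCHAR must hold

print(char_length, byte_length)   # the byte count is larger for non-ASCII text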
A secondary issue is one of storage space. Different database environments support different representations of the different character storage types, and sometimes have wildly different performance characteristics for these different types.
Within NoSQL or Big Data environments, the length (or size) of the string is rarely a problem, as they don't have fixed or strict datatypes. However, most RDBMS environments have specific lengths and limits. Oracle, for example, supports only 4000 bytes in a VARCHAR2; MySQL supports 255 characters in a CHAR, or 65,535 bytes in a VARCHAR.
Finally, when transferring the information you may need to pay attention to the delimiters. Using CSV, for example, and relying on quotes to define the field boundaries only works when there aren't unescaped quotes within the field content.
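Rather than hand-rolling the quoting, it is usually safer to let a CSV library deal with it. The following Python sketch shows embedded quotes and commas surviving a write/read round trip; the row content is invented for the example:

import csv, io

row = [1, 'He said "hello, world"', "2014-10-04 12:00:00"]

buffer = io.StringIO()
csv.writer(buffer).writerow(row)
print(buffer.getvalue().strip())   # 1,"He said ""hello, world""",2014-10-04 12:00:00

parsed = next(csv.reader(io.StringIO(buffer.getvalue())))
print(parsed)   # the embedded quotes and comma are recovered intact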
Dates and Times
With all of the different data types that we have covered up to now, there have been problems with understanding and parsing the values because of differences in the type, format, or structure of the data, but all of them fell within some fairly simple limits and structures.
Dates are another problem entirely. In particular:
- Date precision and format
- Time precision and format
- Dates or Epochs?
- Time Zones
All of these go together to make dates one of the most complicated of all the types to transfer, because there are so many places where things can go wrong.
Epochs
Epoch values are those where the data is represented as an integer counting, usually, the seconds from a specific reference point in time, from which the current date can be calculated. For example, Unix-based Epoch times are represented as the number of seconds that have elapsed since Jan 1st 1970 at 00:00:00 (12:00am) GMT. Counting forward from this enables you to represent a date. For example, the value:
1413129365
Is in fact 12th October 2014.
There are two issues that arise from Epoch dates, time drift and date limits.
Time drift occurs if the date has been stored as an epoch that is relative to the current timezone. This can happen more frequently than you realize when dates are reconstituted into an Epoch from a local time-based value. For example, some libraries that parse a date without an explicit timezone will assume that the date is within the current timezone of the system.
This is a particularly thorny problem when you realize that epochs have no timezones of their own. This means that the Epoch value:
1413129365
Is 15:56 GMT, but 07:56 PST. If you now transfer a PST-based epoch to GMT and then use it without conversion, all your times will be out by eight hours. If you ran batch jobs on the imported data at 1am, that time could actually refer to a completely different day.
If you must use epoch values, ensure that you either know what the timezone was, or adjust the value so that it is against GMT and you can translate to the appropriate timezone when you need to. Also see the section on timezones below.
The secondary problem is date limits. Traditionally epoch values were stored as signed 32-bit integers, which limits the representable dates to between 1970 and 2038. While this is fine for current times (at least for the next 24 years or so), it can be an issue for dates further in the future.
If you are porting epoch values to a target that only supports 32-bit dates, and the date is beyond 2038, don't transfer it using the epoch value; translate it into an explicit date that can be parsed and stored in whatever local format is required for the target environment. For example, within MySQL you can use the FROM_UNIXTIME() function to translate your epoch date into something more usable.
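As a sketch of the same idea outside the database, the following Python fragment converts an epoch value into an explicit, UTC-anchored date string that any target can parse:

from datetime import datetime, timezone

epoch = 1413129365   # the example value used above

explicit = datetime.fromtimestamp(epoch, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
print(explicit)      # 2014-10-12 15:56:05, anchored to UTC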
Date Formats
When transferring dates, use a format that is unambiguous and supported by the target system. Different locations and systems have different ways of representing dates, including the different separators that are used and the different orders of the components. Even the use of prefixes for some numbers differs between regions. Some examples are shown in the table below.
Location    Format
USA         Month.Day.Year
Japan       Year-Month-Day
Europe      Day.Month.Year
UK          Day/Month/Year

Different locations and date formats
The best format to use is usually the ISO format:
YEAR-MONTH-DAY
With a zero prefix added to each value to pad it to the correct number of characters. For example, the 1st of January:
2014-01-01
Or the year 1:
0001-01-01
The ISO format is not only readable on just about every single platform, it also has the advantage of being sortable both numerically and by ASCII code, making it a practical way of exporting and loading data in date order without having to explicitly sort by date.
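If the incoming data arrives in one of the locale-specific formats from the table above, normalizing it to ISO before transfer is straightforward, provided you know the source format. A minimal Python sketch, with the format strings as illustrative assumptions:

from datetime import datetime

def to_iso(date_string, source_format):
    return datetime.strptime(date_string, source_format).strftime("%Y-%m-%d")

print(to_iso("10.04.2014", "%m.%d.%Y"))   # US style    -> 2014-10-04
print(to_iso("04/10/2014", "%d/%m/%Y"))   # UK style    -> 2014-10-04
print(to_iso("2014-10-04", "%Y-%m-%d"))   # already ISO -> 2014-10-04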
Time Formats
Time is usually restricted to a fairly obvious format, that of:
Hours:Minutes:Seconds
Or in some regions and standards:
Hours.Minutes.Seconds
Aside from the timezone issue, which we will look at next, the other problem is the level of precision. Some databases do not support any precision beyond whole seconds. For example, within Oracle you can store fractional seconds to up to 9 decimal digits, while Amazon RedShift supports only 6 digits of precision.
Also be aware that some environments may not support explicit date and time types, but only a unified datetime or timestamp type. In this case, the structure can be even more limited. For example, within Amazon RedShift, the timestamp datatype is actually formatted as follows:
YYYYMMDD HH:MM:SS
With the date in ISO format but without explicit date separators.
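Where the source carries more fractional-second digits than the target supports, the simplest approach is to truncate (or round) before transfer. A minimal Python sketch, assuming a six-digit limit such as a microsecond-based TIMESTAMP:

def truncate_fraction(time_string, digits=6):
    """Trim the fractional seconds of 'HH:MM:SS.fffffffff' to the target's precision."""
    if "." not in time_string:
        return time_string
    whole, fraction = time_string.split(".", 1)
    return whole + "." + fraction[:digits]

print(truncate_fraction("12:00:00.123456789"))   # 12:00:00.123456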
Time Zones
Every time represented everywhere is actually a time within a specific timezone, even if that timezone is UTC (Coordinated Universal Time). The problem with timezones is that the timezone must either be explicitly stored, shared, and represented, or it must be stated or understood between the two systems that the time is within a specific known timezone. Problems occur either when the timezone is not correctly represented, or when assumptions are made.
For example, the following time:
2014-09-03 17:14:53
Looks clear enough. But if it has come from the BST (British Summer Time) timezone and gets imported into a database running in IST (Indian Standard Time), then the time effectively gets stored with the wrong value if the timezone is not explicitly specified.
Another issue arises when there are timezone differences in the transferred data, not because of the physical time difference, but because of the effect of daylight savings time. Transferring data from, say, GMT to PST is not a problem if you know the timezone. Transfer the data during a daylight savings time change, however, and you can hit a problem. This is especially true for timezones that switch to and from daylight savings time on different dates.
Finally, be prepared for databases that simply do not support the notion of timezones at all. To keep these databases in synchronization with other databases with which you might be sharing information, the easiest method is to use GMT.
In general, the easiest solution for all timezone-related data is to store, transfer, and exchange the data using the UTC timezone and let the target database handle any translation to a localized timezone. If you need to explicitly record the timezone – perhaps because the data refers to a specific timezone as part of the data – then use a time type that supports it, or store a second field that contains the timezone information.
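As a sketch of that approach, the following Python fragment normalizes a local timestamp to UTC before transfer; the source timezone name is an assumption for the example and would come from your own metadata:

from datetime import datetime
from zoneinfo import ZoneInfo   # Python 3.9+

local = datetime(2014, 9, 3, 17, 14, 53, tzinfo=ZoneInfo("Europe/London"))
utc = local.astimezone(ZoneInfo("UTC"))

print(utc.strftime("%Y-%m-%d %H:%M:%S"))   # 2014-09-03 16:14:53 -- BST is UTC+1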
Compound Types
We've already looked at some of the issues in terms of the structural impact of compound types. Even if you have the ability to represent a value as a compound structure within your target database, you need to understand the limitations and impact of compound types, as not all systems are the same. Some of these limitations and effects are database specific, others are implementation specific.
For example, within MySQL and PostgreSQL, the ENUM type enables you to store a string-like value chosen from a fixed list. For example:
ENUM('One','Two','Three','Four')
The benefit of this from a database perspective is that internally each string can be represented by a single number, but only reconstituted into the string during output. For targets that do not support it, therefore, the solution is to translate what was an ENUM column in MySQL into a string in the target database.
MySQL also supports the SET type, which is similar to ENUM, except that the value can refer to multiple options. For example:
SET('Mon','Tue','Wed','Thu','Fri','Sat','Sun')
The SET type enables you to record not only the specific day, but maybe a group of days, for example:
INSERT INTO table VALUES ('Mon,Wed,Fri')
Again, internally this information is represented as a set of bits, and the actual data is only expanded and displayed as a compound value on output.
When translating to a database that doesn't support the type, you may either want to create an additional column for each value, or, if you are using a document database, you may want the SET field converted to an array or hash of values:
{
'Mon' => 1,
'Wed' => 1,
'Fri' => 1
}
Always consider how the data was used and will be searched on either side of the database transfer. In an SQL RDBMS queries normally take the form:
SELECT * FROM table WHERE FIND_IN_SET('Mon',days)>0;
That is, return all the values where the field contains the value 'Mon'. In a database that supports searching or indexing on individual values (MongoDB, Couchbase), the key-based transfer works better: we effectively set each member of a hash to a meaningless value so that we can do key-based lookups. We'll examine this in more detail when we examine the environments of these databases.
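As a sketch of the conversion itself, the following Python fragment turns a MySQL SET value into the hash-style document shown earlier, so that a document store can index and look up the individual members:

def set_to_document(set_value):
    """Convert 'Mon,Wed,Fri' into {'Mon': 1, 'Wed': 1, 'Fri': 1}."""
    return {member: 1 for member in set_value.split(",") if member}

print(set_to_document("Mon,Wed,Fri"))   # {'Mon': 1, 'Wed': 1, 'Fri': 1}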
Serialized and Embedded Formats
For a whole variety of reasons, some people store more complex formats in their database so that the data can be more easily manipulated and used within the deeper elements of their application.
For example, serializing an internal structure, such as a Perl or Java object, so that it can be stored into a field or BLOB within the database is a good way of making use of complex internal structures while retaining the ability to store and manipulate the more complex data within the application environment.
If all you want is to transfer the serialized format from one database to another, then the basics are unlikely to change. You may need to use the binary translation methods in the next section to get the data into the new database reliably, but otherwise the transfer should be straightforward.
However, there is also the possibility that you want to be able to query or extract data that may have been embedded in the serialized object.
In this case, you need to change the way that you use and manipulate the information as part of the data migration process. You may want to take the information and expand it to expose the embedded fields as transferable data.
Or, you may simply want to change the content of the information from its serialized format into a more universal format, such as JSON.
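As a sketch of that second approach, the following Python fragment re-encodes a language-native serialized structure into JSON so that the embedded fields become visible and queryable in the target; pickle stands in here for whatever serialization format the source application actually used:

import json, pickle

original = {"user": "alice", "scores": [10, 42, 7]}
blob = pickle.dumps(original)   # what might be sitting in a BLOB column

restored = pickle.loads(blob)   # deserialize with the native library first
print(json.dumps(restored))     # {"user": "alice", "scores": [10, 42, 7]}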
Binary and Native Values
Binary data, that is, data that is explicitly stored and represented in a binary format, is difficult to process when moving it between databases:
- Binary means that single-character delimiters become useless, even control characters. 0x01 is just as likely to come up inside binary data as it is to be used as a field separator.
- Pure, native binary data suffers from the problem of 'endianness', that is, the byte order of the individual bytes. Text and textual numeric translations don't tend to suffer from this because systems know how to parse text. When exchanging raw binary data, however, the endianness of the source and target systems matters.
Binary data can also be affected by any translation or migration process that is expecting a string representation of information. For example, it is not uncommon for UTF8 to be used when reading binary data, which leads to interpretation and storage problems.
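When you do have to move raw binary values, stating the byte order explicitly removes the endianness ambiguity. A minimal Python sketch using the struct module:

import struct

value = 439857

big    = struct.pack(">I", value)   # big-endian, 4-byte unsigned integer
little = struct.pack("<I", value)   # little-endian, 4-byte unsigned integer

print(big.hex(), little.hex())                # same value, reversed byte order
print(struct.unpack(">I", big)[0] == value)   # True when unpacked with the same order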
In general, the best advice for true binary information is for the data to be encoded into one of the many forms of binary-to-text translation formats. This can include solutions such as raw hex conversion, where the data is quite literally expanded to a two-character hex string for each binary byte. For example, we can translate any byte string into hex values with tools like Perl:
$ perl -e "print unpack('H*','Hello World')"
48656c6c6f20576f726c64
Or use uuencode:
begin 666 HelloWorld
,2&5L;&\@5V]R;&0*
`
end
Or use the MIME64 (Base64) standard that is employed in many modern email and Web environments for transferring attachments, as it ensures that even multi-byte characters are transferred intact.
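As a sketch, the Python standard library's base64 module performs the same MIME-style encoding and decoding, round-tripping binary data through a text-only transfer format:

import base64

payload = b"Hello World"

encoded = base64.b64encode(payload)
print(encoded.decode("ascii"))     # SGVsbG8gV29ybGQ=
print(base64.b64decode(encoded))   # b'Hello World'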
All of these solutions can be easily processed on the other side back into the binary format, according to the endianness of the host involved.