You’re reading this series of posts because you want to learn about databases and how to use them.
What you should not be doing is learning about databases and how to use them, while working inside a production environment.
Also called “prod”, because we’re lazy, this server (or servers) is not meant for testing things out. We should always make sure that when practising new bits of code, we do it in a development environment (“dev”). At least if we make a mistake (and mistakes happen a lot in this field), it won’t cause the main payroll system to crash.
The best way to set up a development server is to create a virtual machine where you can install anything you like, and if something goes wrong, you can delete and rebuild the virtual machine without causing monetary and/or data loss.
I’m not kidding. Go set up a virtual machine. If you don’t know how, then ask me and I’ll explain it (there’s a future blog post for setting up a virtual machine).
Taking a short break from the Database Fundamentals series of the last few weeks, I’d like to mention some upcoming PASS community events in the province of Alberta.
I will be presenting at SQLSaturday #594 in Edmonton on 22 April 2017 (this coming Saturday). My topic is Migrating to Azure SQL Database: Tips, Tricks and Lessons Learned.
Next weekend, I will be hosting SQLSaturday #607 in Calgary on 29 April 2017. This is the first ever SQLSaturday in the city of Calgary, and we even have a special message from our celebrity mayor, Naheed Nenshi.
If you live in or around these two cities, please come and say hi. You can also reach out to me on Twitter at @bornsql or @sqlsatcalgary.
The Database Fundamentals series will continue next week.
A friend of mine in the filmmaking business, who is exceedingly bright but has never worked with SQL Server before, was reading through the first five posts of this Database Fundamentals series, and asked a great question:
“I guess I’m not understanding what a byte is. I think I’m circling the drain in understanding it, but not floating down.”
She has a way with words.
I answered her immediately, but it reminded me that I did get a little carried away with data types, assuming that everyone reading that post would understand what a byte is.
In the innards of the computer is the CPU, or Central Processing Unit (there might be more than one in a server). The CPU is best described as a hot mess of on-off switches. Just as it is in your house, a switch only has two states.
This is what “binary” means. When the CPU clock ticks over, billions of times per second, if a switch is closed, it’s a 1 (electricity can flow to complete the circuit). If the switch is open, it’s a 0 (electricity cannot pass through it).
The CPU (and memory, and storage system, and network) understand binary, and the software that sits on top of it uses binary as well.
We end up with a series of 1s and 0s that, when arranged in different combinations, represent information in some form or another. Each of these is a binary digit, or bit.
Through a series of decisions in the old days of computing, when we stick eight of these bits of data together, they form a byte.
Now comes the mathematical part of today’s post.
If we have 8 digits that can store two values each, we get a total of 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2 combinations. This is more easily typed as 2^8, or 256. In other words, a byte can store a maximum of 256 values.
Here’s a short list of bytes to give you an example (I have not listed every one of the 256 possibilities). We write the bits in groups of four to make them easier to read.

0011 0000 = 48 = the character 0
0100 0001 = 65 = the character A
0110 0001 = 97 = the character a

There are values missing from the above table, for characters that cannot be displayed correctly in a web browser. For a complete table showing all 256 characters, visit PCGuide.com.
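We can verify a few of these values in SQL Server itself. This is a minimal sketch using the built-in ASCII() and CHAR() functions, which convert between a character and its single-byte value:

```sql
-- Convert characters to their single-byte values, and back again
SELECT ASCII('A') AS [ValueOfA],   -- 65, or 0100 0001 in binary
       ASCII('a') AS [ValueOfa],   -- 97, or 0110 0001 in binary
       CHAR(48)  AS [Character48]; -- the character 0
```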
How does this affect Unicode values? If you remember in our post about CHAR, NCHAR, VARCHAR and NVARCHAR data types, we discovered that the Unicode versions (those types starting with N) will use two bytes in memory and on disk to store a single character, compared to the non-Unicode (sometimes called ASCII or plain text) data types, which use only one byte per character.
The high-level reason for this is that some alphabets have more than 256 characters, so the code page (the full set of characters in upper- and lower-case where applicable, plus all the numbers, punctuation marks, and so forth) won’t fit in the 256 possibilities available in a single byte.
When we stick two bytes together, however, we suddenly have as many as 2^16 values that we can store, for a total of 65,536 possibilities. This is mostly good enough, unless you’re storing certain Japanese characters in SQL Server.
There are exceptions to this, where some kanji take up four bytes per character. These are known as supplementary characters, which are stored as surrogate pairs in UTF-16. The good news is, SQL Server does support these characters that are wider than standard two-byte Unicode, as long as we pick the correct collation.
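We can see the one-byte versus two-byte difference for ourselves with the DATALENGTH() function, which returns the number of bytes a value uses:

```sql
-- The same three characters use twice the bytes when stored as Unicode
SELECT DATALENGTH('abc')  AS [NonUnicodeBytes], -- 3 bytes (like VARCHAR)
       DATALENGTH(N'abc') AS [UnicodeBytes];    -- 6 bytes (like NVARCHAR)
```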
I hope this answers any burning questions you may have had about bits and bytes.
Feel free to reach out to me on Twitter at @bornsql.
If there’s one thing that SQL Server is really good at, it’s relationships. After all, a relational database management system without the relationships is nothing more than a place to store your stuff.
Last week we briefly looked at a denormalized table, and then I suggested that breaking it up into five separate tables would be a good idea. So good, in fact, that it took me more than 2,000 words to explain just the first table in our highly contrived example.
Assuming you have read through all those words, let’s attempt a much more condensed look at the other four tables. If you recall, we had: Stores, Salespersons, Customers, Products, and Transactions.
We tackled the Stores table first because everything is backwards when we design databases.
For the next three tables, I’m going to just show you how I would design them, without going into detail. Take a close look at Salespersons, though (which we’ll look at first) because it will give you a clue about how we finally link all the tables together in the Transactions table.
Then take a look at … PaymentTypes? ProductTypes? Colours? Categories? Sizes? Uh … What’s going on here? Where did all those tables come from? Luckily, T-SQL allows comments, which you’ll see below.
CREATE TABLE [Salespersons] (
[SalespersonID] SMALLINT NOT NULL IDENTITY(1,1),
[StoreID] SMALLINT NOT NULL,
[FirstName] NVARCHAR(255) NOT NULL,
[LastName] NVARCHAR(255) NOT NULL,
[EmailAddress] NVARCHAR(512) NOT NULL,
CONSTRAINT [PK_Salespersons] PRIMARY KEY CLUSTERED ([SalespersonID] ASC)
);
-- List of possible payment types (e.g. credit card, cheque)
CREATE TABLE [PaymentTypes] (
[PaymentTypeID] TINYINT NOT NULL IDENTITY(1,1),
[Description] VARCHAR(255) NOT NULL,
CONSTRAINT [PK_PaymentTypes] PRIMARY KEY CLUSTERED ([PaymentTypeID] ASC)
);
CREATE TABLE [Customers] (
[CustomerID] BIGINT NOT NULL IDENTITY(1,1),
[FirstName] NVARCHAR(255) NOT NULL,
[LastName] NVARCHAR(255) NOT NULL,
[EmailAddress] NVARCHAR(512) NOT NULL,
[Telephone] VARCHAR(25) NOT NULL,
[PaymentTypeID] TINYINT NOT NULL,
CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED ([CustomerID] ASC)
);
-- List of possible product types (e.g. iPhone, iPhone cover, iPod)
CREATE TABLE [ProductTypes] (
[ProductTypeID] TINYINT NOT NULL IDENTITY(1,1),
[Description] VARCHAR(255) NOT NULL,
CONSTRAINT [PK_ProductTypes] PRIMARY KEY CLUSTERED ([ProductTypeID] ASC)
);
-- List of possible colours
CREATE TABLE [Colours] (
[ColourID] TINYINT NOT NULL IDENTITY(1,1),
[Description] VARCHAR(255) NOT NULL,
CONSTRAINT [PK_Colours] PRIMARY KEY CLUSTERED ([ColourID] ASC)
);
-- List of possible categories (5.5", 4.7", 4", 3.5")
-- This replaces "size", since we might use Size to denote storage
CREATE TABLE [Categories] (
[CategoryID] TINYINT NOT NULL IDENTITY(1,1),
[Description] VARCHAR(255) NOT NULL,
CONSTRAINT [PK_Categories] PRIMARY KEY CLUSTERED ([CategoryID] ASC)
);
-- List of possible sizes ("8GB", "16GB", "32GB", etc.)
-- Can also be used for other product types like laptops
CREATE TABLE [Sizes] (
[SizeID] TINYINT NOT NULL IDENTITY(1,1),
[Description] VARCHAR(255) NOT NULL,
CONSTRAINT [PK_Sizes] PRIMARY KEY CLUSTERED ([SizeID] ASC)
);
CREATE TABLE [Products] (
[ProductID] TINYINT NOT NULL IDENTITY(1,1),
[ProductTypeID] TINYINT NOT NULL,
[ColourID] TINYINT NOT NULL,
[CategoryID] TINYINT NOT NULL,
[SizeID] TINYINT NOT NULL,
[SellingPrice] SMALLMONEY NOT NULL,
CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED ([ProductID] ASC)
);
Those tables popped out of nowhere, didn’t they? Welcome to the world of normalization. To design a database properly, we come to the realisation that we can simplify the data input even more, reducing repeated values, and finding the most unique way of representing data. Products comprise product types. A single list of colours can be reused in various places. Payment types can be used for all sorts of transactional data.
We end up with a lot of tables when we normalize a database, and this is perfectly normal. When we want to read information out of the system the way senior management wants, we must join all these tables together.
The only way to join tables together in a safe and meaningful way is with foreign key relationships, where one table’s primary key is referenced in another table, with a matching data type.
The Salespersons table has a StoreID column. As it stands, there’s no relationship between Stores and Salespersons until we create the relationship using T-SQL.
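That T-SQL might look like this (the constraint name FK_Salespersons_Stores follows the naming convention discussed further down):

```sql
ALTER TABLE [Salespersons]
ADD CONSTRAINT [FK_Salespersons_Stores] FOREIGN KEY ([StoreID])
REFERENCES [Stores] ([StoreID]);
```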
Line 1: Inform SQL Server that we are altering an existing table
Line 2: By adding a foreign key constraint (i.e. limiting what can go into the StoreID column)
Line 3: By forcing it to use the values from the Stores table’s StoreID column (i.e. the primary key).
In a relationship diagram, it looks like this (in SQL Server Management Studio’s Database Diagram tool):
The yellow key in each table is the Primary Key (StoreID and SalespersonID respectively). There is a StoreID column in both tables with the same data type (SMALLINT). The foreign key (FK) does not have to match the name of the primary key (PK), but it makes things a lot easier to have the same name for both sides of a relationship in large databases, so it’s a good habit.
Notice the direction of the relationship (FK_Salespersons_Stores) in the picture, with the yellow key on the table with the Primary Key. The name of the relationship is also sensible. To the casual eye, this says that there’s a Foreign Key constraint in the Salespersons table that points to the Primary Key in the Stores table.
Now we see why data types are so important with relational data. A relationship is not even possible between two tables if the data type is not the same in both key columns.
With this constraint enabled, whenever we insert data into the Salespersons table, we have to make sure that whatever we put into the StoreID column must already exist in the Stores table.
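For example, if the Stores table contains StoreID values 1 through 3, an INSERT referencing a store that doesn’t exist will fail. The names and values here are made up for illustration:

```sql
-- This succeeds, because StoreID 1 exists in the Stores table
INSERT INTO [Salespersons] ([StoreID], [FirstName], [LastName], [EmailAddress])
VALUES (1, N'Thandi', N'Smith', N'thandi@example.com');

-- This fails with a foreign key violation, because StoreID 99 does not exist
INSERT INTO [Salespersons] ([StoreID], [FirstName], [LastName], [EmailAddress])
VALUES (99, N'Alex', N'Jones', N'alex@example.com');
```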
Let’s do the rest of the relationships so far, and then we’ll look at the Transactions table.
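Following the same pattern, the remaining relationships might look like this:

```sql
-- Customers reference a preferred payment type
ALTER TABLE [Customers]
ADD CONSTRAINT [FK_Customers_PaymentTypes] FOREIGN KEY ([PaymentTypeID])
REFERENCES [PaymentTypes] ([PaymentTypeID]);

-- Products reference their type, colour, category and size
ALTER TABLE [Products]
ADD CONSTRAINT [FK_Products_ProductTypes] FOREIGN KEY ([ProductTypeID])
REFERENCES [ProductTypes] ([ProductTypeID]);

ALTER TABLE [Products]
ADD CONSTRAINT [FK_Products_Colours] FOREIGN KEY ([ColourID])
REFERENCES [Colours] ([ColourID]);

ALTER TABLE [Products]
ADD CONSTRAINT [FK_Products_Categories] FOREIGN KEY ([CategoryID])
REFERENCES [Categories] ([CategoryID]);

ALTER TABLE [Products]
ADD CONSTRAINT [FK_Products_Sizes] FOREIGN KEY ([SizeID])
REFERENCES [Sizes] ([SizeID]);
```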
Transactions table: 96 bytes (32 bytes per transaction)
TransactionID: 8 bytes
TransactionDate: 7 bytes
ProductID: 1 byte
DiscountPercent: 5 bytes
SalespersonID: 2 bytes
CustomerID: 8 bytes
HasAppleCare: 1 bit (expands to 1 byte)
GRAND TOTAL: 638 bytes to represent all three transactions
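A Transactions table matching those byte counts might be declared like this. The DECIMAL(4,2) for the discount and DATETIME2(3) for the date are assumptions that happen to fit the 5-byte and 7-byte sizes listed above:

```sql
CREATE TABLE [Transactions] (
[TransactionID] BIGINT NOT NULL IDENTITY(1,1), -- 8 bytes
[TransactionDate] DATETIME2(3) NOT NULL,       -- 7 bytes
[ProductID] TINYINT NOT NULL,                  -- 1 byte
[DiscountPercent] DECIMAL(4,2) NOT NULL,       -- 5 bytes
[SalespersonID] SMALLINT NOT NULL,             -- 2 bytes
[CustomerID] BIGINT NOT NULL,                  -- 8 bytes
[HasAppleCare] BIT NOT NULL,                   -- 1 bit (stored as 1 byte)
CONSTRAINT [PK_Transactions] PRIMARY KEY CLUSTERED ([TransactionID] ASC)
);
```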
The denormalized version (the original example from last week) works out as follows. Recall we said that each column was NVARCHAR(4000), or possibly even NVARCHAR(MAX).
At our most generous, we would need 1,166 bytes to record these three transactions. That’s almost double the data required, just for these three. Plus, the data has no foreign key relationships, so we cannot be sure that whatever is being added to the denormalized table is valid.
As time goes on, the normalized tables will grow at a much lower rate proportionally. Consider what a denormalized Transactions table would look like with an average row size of 388 bytes, for ten million rows (3.6GB).
Compare that to a normalized database, with ten million transactions for 8 million customers. Even assuming we have a hundred products, with twenty colours, and 30 product types, we would see only around 1GB of space required to store the same data.
We know Apple is one of the most successful technology companies in terms of sales, so extrapolating to 1 billion transactions, we’d be comparing 361GB (for the denormalized table) with less than half that (178GB) even if every single customer was unique and only ever bought one item.
Aside from the staggering savings in storage requirements, normalization gives us sanity checks with data validation by using foreign key constraints. Along with proper data type choices, we have an easy way to design a database properly from the start.
Sure, it takes longer to start, but the benefits outweigh the costs almost immediately. Less storage, less memory to read the data, less space for backups, less time to run maintenance tasks, and so on.
Next week, we talk briefly about bits and bytes, and then we will start writing queries. Stay tuned.
Find me on Twitter to discuss your favourite normalization story at @bornsql.
When we want to use these data types for our columns, we need to declare them. Some require a length, some require a precision and scale, and some can be declared without a length at all. For example:
No Length (implied in data type): DECLARE @age AS TINYINT;
Explicit Length (length is supplied): DECLARE @firstName AS VARCHAR(255);
Precision and Scale: DECLARE @interestRate AS DECIMAL(9,3);
Let’s talk a bit about precision and scale, because those values between the brackets may not work the way we think they do.
Precision and Scale
Data types with decimal places are defined by what we call fixed precision and scale. Let’s look at an example: 123,456.789.
In the above number, we see a six-digit number (ignoring the thousand separator) followed by a decimal point, and then a fraction represented by three decimal places. This number has a scale of 3 (the digits after the decimal point) and a precision of 9 (the digits for the entire value, on both sides of the decimal point). We would declare this value as DECIMAL(9,3).
This is confusing at first glance, because we have to declare it “backwards”, with the precision first, and then the scale. It may be easier to think of the precision in the same way we think of a character string’s length.
Date and time data types can also have decimal places, and SQL Server supports times accurate to the nearest 100 nanoseconds. The most accurate datetime is DATETIME2(7), where 7 decimal places are reserved for the time.
Before SQL Server 2008, we used DATETIME, which is only accurate to the nearest 3 milliseconds, and uses 8 bytes. A drop-in replacement for this is DATETIME2(3), using 3 decimal places, and accurate to the nearest millisecond. It only needs 7 bytes per column.
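We can see DATETIME’s 3-millisecond granularity in action by casting the same value to both types. Notice how DATETIME rounds to increments of .000, .003 and .007 seconds:

```sql
SELECT CAST('2017-04-22 10:30:00.001' AS DATETIME)     AS [Rounded],   -- .000
       CAST('2017-04-22 10:30:00.005' AS DATETIME)     AS [RoundedUp], -- .007
       CAST('2017-04-22 10:30:00.001' AS DATETIME2(3)) AS [Exact];     -- .001
```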
Be mindful that, as higher precision and scale are required, a column’s storage requirement increases. Accuracy is a trade-off with disk space and memory, so we may be tempted to use floating point values everywhere.
However, in cases where accuracy is required, always stick to exact numerics. Financial calculations, for example, should always use DECIMAL and MONEY data types.
Exact Numerics are exact, because any value that is stored is the exact same value that is retrieved later. These are the most common types found in a database, and INT is the most prevalent.
Exact numerics are split up into integers (BIGINT, INT, SMALLINT, TINYINT, BIT) and decimals (NUMERIC, DECIMAL, MONEY, SMALLMONEY). Decimals have decimal places (defined by precision and scale), while integers do not.
Integers have fixed sizes (see table below), so we don’t need to specify a length when declaring this data type.
BIGINT: 8 bytes, -2^63 to 2^63-1
INT: 4 bytes, -2^31 to 2^31-1
SMALLINT: 2 bytes, -2^15 to 2^15-1
TINYINT: 1 byte, 0 to 255
BIT: up to 1 byte, 0 to 1
BIT is often used for storing Boolean values, where 1 = True and 0 = False.
Yes, BIGINT can store numbers as large as 2 to the power of 63 minus 1. That’s 19 digits wide, with a value of 9,223,372,036,854,775,807, or 9.2 quintillion.
Decimals may vary depending on the precision and scale, so we have to specify those in the declaration.
DECIMAL / NUMERIC: 5 to 17 bytes, depending on precision and scale. 38 digits is the longest possible precision.
DECIMAL and NUMERIC are synonyms and can be used interchangeably. Read more about this data type, and how precision and scale affect the bytes used, in the official Microsoft documentation.
Although the MONEY and SMALLMONEY data types do have decimal places, they don’t require the precision and scale in the declaration because these are actually synonyms for DECIMAL(19,4) and DECIMAL(10,4) respectively. Think of these data types as a convenience more than anything.
MONEY: 8 bytes, -922,337,203,685,477.5808 to 922,337,203,685,477.5807
SMALLMONEY: 4 bytes, -214,748.3648 to 214,748.3647
Approximate Numerics mean that the value stored is only approximate. Floating point numbers would be classified as approximate numerics, and these comprise FLOAT and REAL.
Declaring a FLOAT takes an optional length, which represents the number of bits used to store the mantissa. REAL is a synonym of FLOAT(24).
The mantissa means the significant digits of a number in scientific notation, which is how floating point numbers are represented. The default is FLOAT(53). Generally, we stick to the defaults, and use REAL if we want to save space, forgoing some accuracy of the larger FLOAT(53).
FLOAT(53): 8 bytes, -1.79E+308 to -2.23E-308, 0 (zero), and 2.23E-308 to 1.79E+308
REAL, or FLOAT(24): 4 bytes, -3.40E+38 to -1.18E-38, 0 (zero), and 1.18E-38 to 3.40E+38
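Here is a quick demonstration of why “approximate” matters. Adding 0.1 to itself three times with FLOAT does not produce exactly 0.3, while DECIMAL does:

```sql
DECLARE @f FLOAT = 0.1;
DECLARE @d DECIMAL(9,1) = 0.1;

-- FLOAT accumulates a tiny binary rounding error; DECIMAL stays exact
SELECT CASE WHEN @f + @f + @f = 0.3 THEN 'Equal' ELSE 'Not equal' END AS [FloatResult],   -- Not equal
       CASE WHEN @d + @d + @d = 0.3 THEN 'Equal' ELSE 'Not equal' END AS [DecimalResult]; -- Equal
```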
Date and Time
Date and time data types are slightly more complex. For storing dates (with no time), we use DATE. We store times (with no dates) using TIME. For storing both date and time in the same column, we can use DATETIME2, DATETIME, or SMALLDATETIME. Finally, we can even store timezone-aware values comprising a date and time and timezone offset, using DATETIMEOFFSET.
DATETIME2, TIME, and DATETIMEOFFSET take a length in their declarations, otherwise they default to 7 (accurate to the nearest 100 nanoseconds).
As we saw last week, characters can be fixed-length (CHAR) or variable-length (VARCHAR), and can support special Unicode character types (NCHAR and NVARCHAR respectively). Collation should also be taken into account.
Length can be 1 to 8000 for CHAR and VARCHAR, or 1 to 4000 for NCHAR and NVARCHAR. For storing values larger than that, see the Large Objects section below.
Sometimes we want to store binary content in a database. This might be a JPEG image, a Word document, an SSL certificate file, or anything that could traditionally be saved on the file system. SQL Server provides the BINARY and VARBINARY data types for this (and IMAGE for backward compatibility).
Length can be 1 to 8000 for BINARY and VARBINARY. For storing values larger than that, see the Large Objects section below.
Large Objects
SQL Server 2005 introduced a new MAX length for the variable-length data types: VARCHAR, NVARCHAR and VARBINARY.
(The XML data type uses MAX under the covers as well.)
This new specification allows up to 2 GB of data to be stored in a column with that declared length. We should take care not to store 2 GB in every row, but it provides greater flexibility when inserting more than 8,000 bytes into one of these columns.
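For example, building a string longer than 8,000 bytes requires the MAX declaration. Note the CAST, because REPLICATE otherwise truncates its result at 8,000 bytes:

```sql
DECLARE @big VARCHAR(MAX);
-- Cast the input to VARCHAR(MAX) so REPLICATE can exceed 8,000 bytes
SET @big = REPLICATE(CAST('x' AS VARCHAR(MAX)), 10000);
SELECT DATALENGTH(@big) AS [Bytes]; -- 10000
```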
Other Data Types
SQL Server supports other types of data, which fall outside the scope of text and numerics. These include CURSOR, TABLE, XML, UNIQUEIDENTIFIER, TIMESTAMP (not to be confused with the date and time types), HIERARCHYID, SQL_VARIANT, and Spatial Types (GEOGRAPHY and GEOMETRY).
Next week, we will see how normalization and data types work together, now that we have a good overview of the different data types in a database.
If you have any thoughts or comments, please find me on Twitter at @bornsql.
Last week we started with a very simple definition of a database: a discrete set of information, with a specific structure and order to it.
We briefly looked at normalization, which is a way to store as little of the information as possible, so that it stays unique.
We will cover more normalization as we move forward through this series, but first we will talk about how the information, or data, is stored. (This does affect normalization and relationships, even if that is not immediately clear.)
For this week’s discussion, we need to consider a spreadsheet application, like Microsoft Excel or Google Sheets.
Columns and Rows
In a spreadsheet, we have columns and rows. Usually we will also have a header row, so we can distinguish between each column.
Each column, in turn, may be formatted a certain way so that we can easily see what kind of information is in that column. For instance, we may want to format a column as a currency, with leading symbol and the two decimal places at the end.
We may left-align text values, and we may decide that numbers have no decimal places and are right-aligned. Dates and times get their own formatting.
If we were to compare this structure to that of a database, we can imagine that each sheet is a table, and each column and row is a column and row in the table.
In some applications like Microsoft Access, we may hear different terminology for columns and rows, namely fields and records. However, in SQL Server, we maintain the same convention as Excel and call them columns and rows.
Because SQL Server doesn’t care about how our data looks, we have to specify those traits when we create the table. Whether creating from scratch or from an import process through an external application (Excel, for example), we need to specify the data type for each column.
There are several key reasons why we want to do this.
In the case of numbers that will be summarized in some way (sum, average, minimum, maximum, mean, mode), we want SQL Server’s database engine to treat these as numbers internally so that it doesn’t have to convert anything, which in turn makes the calculations much faster.
The same goes for dates, times, and datetimes (where both the date and time is in one column) because the database engine understands date and time calculations, provided the data types are correct.
Text values are also very important but for a fundamentally different reason. While computers understand numbers, it’s humans that understand text.
We will focus the rest of this week’s discussion on storing strings in a database.
Imagine we are developing a database for international customers, and we need to support accented characters or an entirely different alphabet. Database systems use a catch-all term for this, and that is collation.
When we install SQL Server, we are asked to choose a default collation, and we are presented with some arcane terminology which may be confusing, so we usually leave the defaults and click Next.
Collation has to do with how data is sorted, and thus the order in which we see it when data is returned.
Note that collation only affects text columns.
The Windows regional settings for the user installing SQL Server will affect the default collation of a SQL Server installation. If we were to install SQL Server on a machine that is configured with U.S. regional settings, it will have a very different default collation than a server that is set for Canada or Finland.
The default SQL Server collation for US regional settings (SQL_Latin1_General_CP1) may need to be changed to match what is required for the user databases that will be running on a server.
The above values mean the following:
General – the sort order follows 0-9, A-Z;
CP1 – code-page 1, the US English default;
Case Insensitivity and Accent Sensitivity are implied (see below).
When not using US English, or the Latin alphabet, we need to be aware that the data’s sort order is taken into account.
Even more confusingly, some vendor products require a specific collation for their database. For example, Microsoft’s own SharePoint database uses the collation Latin1_General_CI_AS_KS_WS:
CI – Case Insensitive – no difference between upper and lower case when sorting data;
AS – Accent Sensitive – distinguishes between accented characters, for instance, the Afrikaans words “sê” and “se” are considered different;
KS – Kana Sensitive – distinguishes between the two Japanese kana scripts, hiragana and katakana;
WS – Width Sensitive – distinguishes between characters that can be expressed by both single- or double-byte characters.
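We can see accent sensitivity at work by comparing the same two strings from the example above under an accent-sensitive collation (Latin1_General_CI_AS) and its accent-insensitive sibling (Latin1_General_CI_AI):

```sql
SELECT CASE WHEN 'se' = 'sê' COLLATE Latin1_General_CI_AS
            THEN 'Same' ELSE 'Different' END AS [AccentSensitive],   -- Different
       CASE WHEN 'se' = 'sê' COLLATE Latin1_General_CI_AI
            THEN 'Same' ELSE 'Different' END AS [AccentInsensitive]; -- Same
```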
Now that we have a very basic grasp of collation, let’s look at text data types.
We tend to use only four text data types in SQL Server these days:
CHAR(n), NCHAR(n), VARCHAR(n), and NVARCHAR(n), where n may be a number between 1 and 8,000 or the keyword MAX.
For historic reasons, SQL Server set its data page size (the amount of storage available on each data page, including headers and footers) to 8KB many years ago. This means that the largest amount of data we can store on a single page is 8,192 bytes. Once we take away the header and the slot array at the end, we are left with slightly more than 8,000 bytes for our data.
When we store a text value, we need to decide if the characters can be expressed in a single byte, or as double-byte characters (also known as Unicode, using two bytes per character). Writing systems like Kanji and Chinese (Simplified or Traditional) require double-byte characters for each character in their script.
(Some code pages need more than two bytes for a character. That is outside of the scope of this discussion.)
So CHAR and VARCHAR use one byte per character, while NCHAR and NVARCHAR use two bytes per character (the N represents Unicode).
Thus, the longest a CHAR or VARCHAR string can be is 8000, while the longest an NCHAR or NVARCHAR string can be is 4000 (at two bytes per character).
MAX Data Type
In SQL Server 2005, the MAX length was introduced for string and binary data types. The underlying storage mechanism was changed to allow columns longer than 8,000 bytes, where these would be stored in another section of the database file under certain conditions.
The MAX data type allows up to 2 GB (more than two billion bytes) per row, for each column declared that way.
So we have to consider three distinct things when deciding how we store text: collation, Unicode, and string length.
Because my readers are terribly intelligent, you’ve already deduced that the VAR in VARCHAR means “variable length”, and you’d be correct.
We use VARCHAR (and its Unicode equivalent NVARCHAR) for columns that will contain strings with variable lengths, including names, addresses, phone numbers, product names, etc. In fact, along with INT (meaning a 4-byte integer), VARCHAR is probably the most common data type in any database today.
CHAR (and NCHAR), on the other hand, are fixed-length data types. We use this type for string lengths that are unlikely to change. For example, IMEI numbers, Vehicle Identification Numbers, social security numbers (where the dash forms part of the number), product codes, serial numbers with leading zeroes, and so on. The point here is that the length is fixed.
So why don’t we just use VARCHAR instead of CHAR everywhere?
Let’s start with why VARCHAR was introduced in the first place, and why we would use it instead of CHAR.
For columns with unpredictably long strings, we don’t want to reserve all 8,000 bytes per row for a string that may only take up 2,000 bytes—and end up wasting 6,000 (not to mention the storage required for a MAX column)—so we switch to VARCHAR, and each row only uses as many bytes as it needs.
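The difference is easy to demonstrate with DATALENGTH():

```sql
DECLARE @fixed CHAR(10) = 'abc';
DECLARE @variable VARCHAR(10) = 'abc';

-- CHAR pads the value with trailing spaces to its full declared length
SELECT DATALENGTH(@fixed)    AS [FixedBytes],    -- 10
       DATALENGTH(@variable) AS [VariableBytes]; -- 3
```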
However, SQL Server needs to keep track of the length of a VARCHAR column for each row in a table. There is a small overhead of a few bytes per row for every VARCHAR column, so SQL Server can keep track of this length. The reason we don’t replace CHAR and NCHAR outright is, ironically, to save space.
It doesn’t make sense for a table containing millions or billions of rows to use VARCHAR for fixed-length columns because we would be adding on another few bytes per row as unnecessary overhead. Adding just one byte per row across a million rows is roughly 1 MB of storage.
Extrapolating that extra byte to the memory required to hold it, maintenance plans when updating indexes and statistics, backups, replicated databases, and so on, we are now looking at extra megabytes, and possibly gigabytes, for the sake of convenience.
We must make sure that we pick the correct character type for storing strings, beyond just the length of the string. Both CHAR and VARCHAR have their place.
While we did spend most of this discussion on collations and text, we’ve only scratched the surface.
Next week, we will discuss how to pick the right data type for your columns, with concrete examples. This matters a lot with how numbers are stored.
If you have any feedback, find me on Twitter at @bornsql.
To answer that, we have to ask what a relational database management system (RDBMS) is.
To answer that, we have to ask what a relational database is.
To answer that, we have to understand what the relational model is, what a database is, and how these two concepts combine to form what is effectively the basis of most technology today: the database.
A database is, fundamentally, a collection of information that is stored in a defined structure.
A telephone book is a database. A recipe book is a database. Each of these books has a set of rules that define how the information is stored and retrieved. That is the structure.
When we want to retrieve information, we query the structure with language appropriate to the database. Tell me the phone number of Randolph West, by looking up the surnames and going through all the Ws. Find the recipe for lemon meringue by going through the desserts and searching for meringue.
In an RDBMS, the language is called Structured Query Language, or SQL. You can pronounce it like “sequel”, or say each letter.
A few decades ago, IBM employee and computer scientist Edgar Codd developed his “Twelve Rules”, dictating how data should be laid out in a relational manner, using first-order predicate logic.
There’s no really easy way to explain what this exactly means at a philosophical or mathematical level, especially not in this forum, so I will explain what makes a relational database instead.
Imagine you want to buy a new iPhone. You walk into the store, and find a Genius. Her name tag says Thandi, and she takes you to the desk where the various models are displayed.
You decide after a few minutes that you want to get the glossy black one with the big screen and lots of storage, because in this imaginary scenario, you have lots of disposable income.
You also select Apple Care on the phone, and for a little bonus, you get yourself a blue leather phone cover.
Out comes the credit card, the transaction is approved, and Thandi and you exchange Twitter handles and decide to be friends. Everyone is happy.
Here’s what happens from the relational perspective, in a high-level overview:
According to Codd’s paper on relational theory (PDF), these items should be defined by their natural structure, and there should be one way, and only one way, to uniquely identify each item in that purchase event, from you and Thandi, to the phone cover and Apple Care, and how they relate to each other to create a single, unique transaction.
This is called normalization. The transaction can only be recorded once, but in a way that includes all the different information.
Although Apple has sold billions of phones, there is only one product category in their stock called iPhone. It would be reasonable to assume that their database contains a table called Products, and possible values include Mac, iPhone, iPad and iPod. Models would be in their own unique list.
In the same vein, there is a single list of possible colours that these products can be: white, black, silver, gold, red, blue, rose gold. These would go into a database table called Colours.
There are only a few storage capacities they can be: 32 GB, 128 GB, 256 GB. There’s the Storage table.
There would also be a table of Stores, a table of Staff, and a table of Customers. Each possible item would appear only once in these tables.
To distinguish between each unique value in the table, each record, or row, contains a unique identifier, or what we call a Primary Key, or PK for short (sometimes called an identity column, but this is not always the case). This is usually a number, because computers process numbers very efficiently, but can be any data type (we will cover data types in the next post).
For example, in the Colours table, the primary key might be 2 for black. For Products, the phone’s identifier might be 19, because Apple created several product categories before the iPhone was invented. The Storage primary key might be 5. Thandi might have a Staff PK of 322,544. Your Customer PK value might be 477,211,549.
All this information so far doesn’t tell us anything about the transaction itself. To do that, we have to associate each of these Primary Keys from each table, into a single row in a Transactions table, where we record the date and time, along with the sale amount and GST.
This association is called a relationship, and this is where we get the notion of relational data. The data in the Transactions table only has meaning, when it relates to the other tables.
In other words, all of these elements can be used to uniquely identify the sale of the phone and the phone cover to you, provided they all link back to a unique transaction PK.
When we use the values of those primary keys in another table to refer back to the original table, we call them Foreign Keys (FKs). The Transactions table would also have its own PK, as mentioned above, which in our example might be named the TransactionID column.
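Putting it all together, a single row in a hypothetical Transactions table might record little more than keys, plus the transaction-specific values. Every table name, column name, and amount here is made up for illustration, reusing the key values from the example above:

```sql
-- Each *ID column is a foreign key pointing back to its own table
INSERT INTO Transactions
    (ProductID, ColourID, StorageID, StaffID, CustomerID, TransactionDate, SaleAmount, GST)
VALUES
    (19, 2, 5, 322544, 477211549, '2017-04-22 14:05:00', 1599.00, 79.95);
```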
Everything changed for SQL Server Standard Edition on 16 November 2016, including how memory limits work.
On that day, a slew of Enterprise Edition features made their way into editions across the board, including Express Edition and LocalDB.
The memory limit of 128GB RAM applies only to the buffer pool (the 8KB data pages that are read from disk into memory — in other words, the database itself).
For servers containing more than 128GB of physical RAM, and running SQL Server 2016 with Service Pack 1 or higher, we now have options.
The max server memory setting has always referred only to the buffer pool, but for many reasons, a lot of people misunderstood it to include the other caches as well.
Because ColumnStore and In-Memory OLTP have their own cache limits over and above the 128GB buffer pool limit, the guidance around assigning max server memory is no longer simple.
ColumnStore now gets an extra 32GB of RAM per instance, while In-Memory OLTP gets an extra 32GB of RAM per database.
With that in mind, you are still welcome to use the Max Server Memory Matrix and associated calculator script for lower versions of SQL Server (up to and including 2014), but I will not be maintaining it further, unless someone finds a bug.
How much should I assign to max server memory? It depends.
It would be very easy to spec a server with 256GB RAM, install a single instance of SQL Server 2016 Standard Edition (with Service Pack 1, of course), have 128GB for the buffer pool, 32GB for the ColumnStore cache, three databases with 32GB of RAM each for In-Memory OLTP, and still run out of memory.
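Once you have decided on a value, max server memory is set with sp_configure. This sketch assigns 128GB (131,072 MB, an illustrative value) to the buffer pool:

```sql
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;

-- max server memory is specified in megabytes
EXEC sys.sp_configure N'max server memory (MB)', 131072;
RECONFIGURE;
```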
This is a brave new world of memory management. Tread carefully.
If you’d like to share your thoughts, find me on Twitter at @bornsql.