There are several SQL problems that can be solved with a table of numbers or dates.
This is a simple one-column table with numbers from 1 and up. Typical problems
solved with such a table:
o Show me sales for all months, including months for which there were no sales.
o Iterate over all characters in a string in a set-based statement.
o Any other query that you need to drive with a consecutive series.
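As a sketch of the first item, a common pattern is an outer join from the
Numbers table to the data (the Sales table and its columns here are
hypothetical, for illustration only):

```sql
-- Hypothetical schema: Sales(SaleDate, Amount). Report all twelve months
-- of 2016, including months in which there were no sales at all.
SELECT   N.Number AS Month,
         coalesce(SUM(S.Amount), 0) AS Total
FROM     Numbers N
LEFT JOIN Sales S ON month(S.SaleDate) = N.Number
                 AND year(S.SaleDate) = 2016
WHERE    N.Number <= 12
GROUP BY N.Number
ORDER BY N.Number
```

The point is that the months come from Numbers, not from Sales, so a month
with no rows in Sales still appears in the result with a total of zero.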
It's easy to construct and fill such a table. However, there are a couple of problems
with it:
1) If your query needs more numbers than there are in the table, you
get incorrect results.
2) The same thing happens if some smartass deletes rows in the middle - or adds
a zero to the table.
3) There are queries where the optimizer does not know what the table is for
and makes incorrect assumptions.
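For reference, here is one common way to construct and fill such a table;
the size and the source tables for the cross join are a matter of taste:

```sql
CREATE TABLE Numbers (Number int NOT NULL PRIMARY KEY)
-- Fill with 1..1000000. Cross-joining a big catalog view with itself is
-- just a convenient way to get enough rows to number.
INSERT Numbers (Number)
   SELECT TOP (1000000) row_number() OVER (ORDER BY (SELECT NULL))
   FROM   sys.all_columns a
   CROSS  JOIN sys.all_columns b
```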
The third point calls for some explanation. Assume this function:
CREATE FUNCTION inline_split_me(@param nvarchar(MAX))
RETURNS TABLE AS
RETURN(SELECT ltrim(rtrim(convert(nvarchar(4000),
substring(@param, Number,
charindex(N',' COLLATE Slovenian_BIN2,
@param + N',', Number) -
Number)
))) AS Value
FROM Numbers
WHERE Number <= convert(int, len(@param))
AND substring(N',' + @param, Number, 1) = N',' COLLATE Slovenian_BIN2)
This function is used to convert a comma-separated list into a
table. If I say:
CREATE PROCEDURE get_company_names_inline @customers nvarchar(2000) AS
SELECT C.CustomerID, C.CompanyName
FROM Northwind..Customers C
JOIN inline_split_me(@customers) s ON C.CustomerID = s.Value
go
EXEC get_company_names_inline 'ALFKI,BONAP,CACTU,FRANK'
and have one million rows in my Numbers table, the optimizer thinks that
the Numbers table will return 300000 rows. Had Numbers been a
built-in table, the optimizer would have known the hit rate exactly.
Here is a further list, courtesy of Jeff Moden:
Here are several things that I use it for.
1. Continue to use it for a splitter, because the STRING_SPLIT function doesn't return element positions or guarantee any order.
2. Luhn mod 10 checksum for credit card, bank routing, and other similar checksum algorithms.
3. Relational multiplication where rows must be replicated based on a value within the row.
4. Solving complex problems such as many different overlapping dates, currently active rows, distributions of values over multiple months, etc, etc.
5. Using it as a read-less "Pseudo-Cursor" (a row source with no SQL Level loop, every SELECT is a "Pseudo-Cursor") to generate millions of rows of test data in a very easy and intuitive manner in just seconds.
6. Many different string functions: numeric base translations, "initial caps", "data purification" functions such as returning all-alpha or all-numeric strings or removing accidental control characters, and unconventional splitters such as the one I used to convert million-row Adjacency Lists to Nested Sets in less than a minute instead of days with the old push-stack method. Also removal of duplicated characters, special numeric checksums and encoding for GS1/UCC/EAN-128 and other encoders/decoders, etc, etc, etc. There are several other non-delimited, character-based places where performance would have suffered greatly had it not been for a Tally Table or Tally Function (a numbers or sequence table or function, for many of you).
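Item 3 in the list above, relational multiplication, can be sketched like
this (the OrderLines table and its columns are hypothetical):

```sql
-- Hypothetical schema: OrderLines(OrderID, ProductID, Quantity).
-- Replicate each row Quantity times, e.g. to print one label per item.
SELECT   OL.OrderID, OL.ProductID, N.Number AS ItemNo
FROM     OrderLines OL
JOIN     Numbers N ON N.Number <= OL.Quantity
ORDER BY OL.OrderID, OL.ProductID, N.Number
```

A row with Quantity = 3 joins to Numbers 1, 2 and 3 and thus appears three
times in the output, numbered 1 through 3.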
That's certainly not the limit of what I've used it for. I just can't remember all the places I've used it. It's one of those "imagination limited" tools where, for a good imagination, it's an unlimited tool. Like I said, I even use it as a simple "readless row source" to power some things that don't even need a counter but still need to act as a high-performance loop. Heh... and ask Itzik Ben-Gan about all the things he's used it for.
It would be really nice if it were a built-in function that operated at machine-language speeds, hopefully even faster than the "pseudo-cursors" that loop behind the scenes in every Insert, Select, Update, and Delete.

Erland Sommarskog on 10/9/2016 at 11:30 AM: https://www.postgresql.org/docs/9.6/static/functions-srf.html
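The linked page describes PostgreSQL's set-returning functions.
generate_series is exactly such a built-in number source, so no Numbers
table is needed there:

```sql
-- PostgreSQL: a built-in consecutive series, 1 through 10.
SELECT n FROM generate_series(1, 10) AS s(n);
```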