Calling SQLite using JavaScript

[Screenshot: Windows command line (input)]

After a bit of google this and google that, I figured out how to submit an SQL call to SQLite.  This code uses Node.js and SQLite3 to query a database I created called ‘sandbox.db’ that resides on a Windows machine – the same setup that the majority of StoreBoss end users are likely to have running in their back office.

It took me a day to figure out the basics of npm, node, and SQLite3. It’s not too bad, once you get the hang of it.

Next I will figure out how to create an .exe package using node and this js file.

After that, I will figure out how to repeat this process, but with added HTML and CSS files, such that I can display the data in the responsive custom table layouts I’ve already created.  If I can get all this to work, then I will know that I can produce a compiled StoreBoss executable that runs on the desktop, no fuss, no muss — and without having to resort to some opinionated, overbearing framework.

Here’s how I did it.  I don’t have an error handler at the end, because this is just a demo, proof-of-concept kind of thing.  Pay attention to where the closing paren is for each function; that is how I came to understand how callbacks work.  Now all I have to do is extract the call statements from this file and place them in a main index.js, and voila, that’s my ticket out of callback hell: forcing JavaScript to behave procedurally, which is what it does anyway, underneath all the async gunk.

// filename: sqlite.js
// author: gounaman

const sqlite3 = require('sqlite3').verbose();

var testdb = "sandbox.db";

// open the database; the callback fires once the connection attempt completes
function opendb(testdb) {

  var db = new sqlite3.Database(testdb, (err) => {
    if (err) {
      return console.error(err.message);
    }
    console.log('Connected to SQLite database.');
  });

  return db;
}

var db = opendb(testdb);

// fetch every row of the view; db.all hands the whole result set to one callback
function getrows(db) {

  db.all("SELECT * FROM batch_vw;", (err, rows) => {
    if (err) {
      // without this guard, a failed query would leave rows undefined
      return console.error(err.message);
    }
    rows.forEach((row) => console.log(row));
  });

}

getrows(db);

function closedb(db) {
  db.close(); // sqlite3 waits for pending queries to finish before closing
}

closedb(db);

[Screenshot: Windows command line (output)]

And the real beauty of this is that I can now write SQL to my heart’s content without hacking it to pieces with ORM and sanitization.

Life is good again.
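As for the index.js plan mentioned above, here is a minimal sketch (same sandbox.db and batch_vw as in the demo) of how sqlite3's own db.serialize() can force the queued statements to run in written order, no extra framework required:

// index.js -- a sketch of the "procedural" layout described above
const sqlite3 = require('sqlite3').verbose();

const db = new sqlite3.Database('sandbox.db', (err) => {
  if (err) return console.error(err.message);
});

db.serialize(() => {
  // everything queued inside serialize() executes in the order written
  db.all('SELECT * FROM batch_vw;', (err, rows) => {
    if (err) return console.error(err.message);
    rows.forEach((row) => console.log(row));
  });
});

// close() waits for the queued work to drain first
db.close();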


ODBC bindings for SQLite/Node.js

[Screenshot: an HTML page titled "The birth of StoreBoss"]

So here’s the latest on my plan to come up with the simplest way to write an intuitive, nice-looking Woo-related browser app that can connect, with a minimum of ceremony, to a local datastore on the desktop.

There is only one problem: unlike, say, C, JS has no built-in file I/O capabilities.

In the case of wanting to access SQLite, you have to use SQLite3 + node + npm.  All this to accomplish what can be done in one line of code in C, or in a C++ program that uses ODBC to connect to SQL Server.

And if you ever have to write a JS script that uses SQLite3, you will soon begin to rue the maddening, error-prone tedium of its unnecessarily complex error handling.

All I want is to be able to tell JS to (1) connect to SQLite, then (2) open a DB, then (3) retrieve data from a table.

This is not exactly rocket science. I don’t think the user would mind waiting for a split second while a few hundred rows are retrieved in blocking, synchronous mode.

But… no.

Instead of just being able to connect to SQLite synchronously, as you would if you were writing your app in C, you’re forced to deal with SQLite3 + node.js + npm.

Adding all these massive dependencies (over which you have little control) results in a situation where you have no idea what is really going on behind the scenes on your machine: a massive security hole, obviously.

Why on earth was SQLite3 not built to also support synchronous I/O?

After all, remember the SQLite mantra: “open-source… zero-configuration… self-contained… stand-alone…” etc etc?

And remember, too, that the intended number of simultaneous end users is exactly 1. In this scenario, it is RIDICULOUS to be forced to make asynchronous calls to a local db.

Plus which, when it comes to SQLite3, does it ever mention (gulp)

MUTEX THRASHING

in the documentation? Well, does it, punk?

This NJS (NPM + JS + SQLITE3) Rube Goldberg jive is the antithesis of that stated SQLite ideal, or am I wrong?

[Image: Rube Goldberg stamp]

Then, I thought… why not ODBC?

For a moment, I allowed myself to fondly remember those MS Access days way back in the 90s when I was able to use ADO to grab data from almost any source, no fuss, no muss, and thank you very much.

My mood began to turn increasingly dark at the unfairness of Life as I googled frantically.  Then the clouds parted, and I found… this.

[Screenshot: SQLite ODBC driver]

I found it!

An ODBC driver that’s MIT licensed and recommended by no less a reputable source than Microsoft.

I quickly added it to my ODBC data sources in the Administrative Tools of the Control Panel in Windows.

[Screenshot: Libre Base]

Then I tested it with Libre Base.  It worked like a charm, with some minor fiddling (see below).

Then I tested it again, this time with Microsoft Power BI (which I noticed, by the way, is developing connectivity to Stripe as a data source).

[Screenshot: Power BI connect]

That worked fine, also.

So the driver ran as advertised.

But I was not out of the woods yet.

Since JavaScript cannot work with ODBC directly, for a variety of technical and security reasons, I had to find the….

MISSING LINK.

And I did.

On Github.

Here.

So all I had to do now was figure out how to get my JS StoreBoss DB function library to work like this.

[Screenshot: ODBC bindings for node]

Now that is what I call beautiful. Except that it turned out to be a really bad idea (see Postword below).

Postword:  After installing this complex JS chromium runtime extension (which requires you to use an elevated Admin CLI), I could not get the example code to work, and left a call for help on its GitHub page.  I say bag this native extension stuff; too much of a headache. But if you can follow what Gabriel Schulhof says about native add-ons in this YouTube video, please leave a comment!

UPDATE #1:  After several re-installs of npm + node + SQLite3 (since the native extension trashed my node/npm setup), I was able — finally! — to get node + sqlite3 to work together.  I tested this by entering my js statements into the node command line, one by one.  Great way to debug connectivity issues to SQLite via SQLite3 when you are first setting things up, btw.  End result: I was able to connect to my sandbox db on SQLite, and output a result set from a table I had created earlier and populated using DB Browser.  Tomorrow I will try to get node to run a simple .js file that selects data from SQLite and displays it in the console asynchronously. Meanwhile, time to study the Node.js chapter in Eloquent Javascript. The guy who wrote this classic has deep reservations about node, btw.

UPDATE #2:  Hello, hello.  What’s this?  Let’s first look at the Troubleshooting guide.  On Windows, as opposed to UNIX derivatives, things are more complicated (as always).  Let’s see… hmmm.

  1. Start an Admin PowerShell: Right-click the start icon, then pick Windows PowerShell (Admin)
  2. Install both vs2015 and vs2017 libraries. Each line will take ~5-10 minutes.
    npm install --global --production --vs2015 --add-python-to-path windows-build-tools
    npm install --global --production --add-python-to-path windows-build-tools node-gyp

That is what I had to do with the native extension, and it trashed my setup.  I’ll not be installing anything again that forces me to use Admin privileges, so bag that.  Too dangerous, and what if an end user has to change his or her machine’s configuration to get this package to run?  StoreBoss would be dead in the water. Troubling issues page, too.

UPDATE #3:  The odd thing is, JavaScript is actually synchronous at its core.  Callbacks and promises were contortions that were added later, to avoid code having to wait for things like long-running db queries to return results.  Async/await is syntactic sugar over promises that aims to make this mess more human-readable.  See the Hopehne articles here and here, with lots of useful background reading references included.
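To make that concrete, here is a minimal sketch (assuming Node 8+, which ships util.promisify) of the same batch_vw query from the demo above, rewritten with async/await:

const sqlite3 = require('sqlite3').verbose();
const { promisify } = require('util');

async function main() {
  const db = new sqlite3.Database('sandbox.db');

  // wrap the callback-style db.all in a promise (bind keeps `this` = db)
  const all = promisify(db.all.bind(db));

  // now the code reads top to bottom, as if it were synchronous
  const rows = await all('SELECT * FROM batch_vw;');
  rows.forEach((row) => console.log(row));

  db.close();
}

main().catch((err) => console.error(err.message));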

The Great Leap Forward

[Image: Stripe schema ERD, v .01]

Here is an ERD of two important Stripe tables that jumped out as being of immediate interest.

There are approximately 50 tables in the SIGMA SQL interface (I don’t know how these map to actual tables on Stripe’s production backend). I’m definitely not going to dupe them all.

Finding the table that represents the master Stripe Account (where store owner funds are deposited before transfer) is next.  This will give me a readout of the balance-to-date in Stripe.

Just as a reminder, I am primarily interested in capturing relevant data about the money flows of a WooCommerce account over time.  That’s it.  No need to get bogged down in extraneous Stripe detail.

A StoreBoss owner would typically be interested in the following questions:

How much money is in my Stripe account right now?

How much of it is available to me (i.e., net of any holds by Stripe on my money)?

How fast is Stripe transferring the money to my bank?

What is Stripe charging me for this service?

Is Stripe dropping any sales transactions that I made through WooCommerce, getting any sales amounts wrong, or miscalculating batch amounts?

What about refunds/chargebacks?

Once I have the tables that can help provide answers to these questions set up in my StoreBoss XAMPP dev env, I’ll start to look at the query templates that Stripe has made available, and maybe translate these too into SQLite-speak.

I am not going to spend any time customizing the canned Sigma queries, as I have made the decision that StoreBoss is going to retrieve data directly via the Stripe REST API.

Doing so would allow me to load the data off Stripe without Sigma’s 2- or 3-day delay (and save the end user the $40/mo). Eventually, I’m going to have to do the same with bank info.
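As a first cut, the pull could be as simple as this sketch, which uses only Node's built-in https module; the /v1/balance_transactions endpoint and Bearer-key auth are my reading of Stripe's REST conventions, so verify against the current API reference before relying on it:

// sketch: fetch recent balance transactions straight from Stripe,
// bypassing Sigma's lag; expects a secret key in the STRIPE_KEY env var
const https = require('https');

const options = {
  hostname: 'api.stripe.com',
  path: '/v1/balance_transactions?limit=100',
  headers: { Authorization: 'Bearer ' + process.env.STRIPE_KEY },
};

https.get(options, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    const payload = JSON.parse(body);
    // payload.data holds the array of transaction objects
    console.log((payload.data || []).length + ' transactions fetched');
  });
}).on('error', (err) => console.error(err.message));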

I’ve changed my mind about integrating these sources of data on the desktop.

Companies like Leapfin already do this sort of thing in the cloud.

Leapfin, of course, is targeted at large companies, with beaucoup dollars — leaving typically thrifty Woo retailers who also need automated Cash Reconciliation functionality somewhere out to the left of Siberia.

So if I can develop a WP plug-in that does a decent job of transaction matching and offers some flashy temporal analytics, then I think StoreBoss might be of some interest to the Woo customer base.

And even if not, I will still use it for my own Woo e-commerce store, and save myself a lot of tedious checking every month for the state taxes, and at year’s end when it’s time for the Federal.

Hard#core Woo

I attended part of the WooCommerce Community monthly Slack today.  First time for me.  Attendees seemed quite knowledgeable: a mix of (apparently really flush) store owners and highly experienced consultants, with WC dev representation there too.  There was no formal agenda, but the idea seemed to be to talk about the next release of WC.

After some talk about small, focused, and frequent releases going forward, the conversation soon turned to flat file custom tables and the Orders and Products flat file plug-ins (they are on GitHub; I added them to my list of links that you can find on the menu).

There was concern re these plugins that (1) bugs were not being fixed and (2) they might stop working if Core changed.  Everyone comes from a different POV; mine happens to be a DB one.  I asked a couple of questions, but they were not answered — except for my first one, which was: did WC plan on releasing any Admin Panel-facing Gutenberg blocks?

One of the developers answered “no,” but added that he was not exactly sure what I meant.  Well, I am not sure why that was unclear, but I let it go.  Later, I emailed the moderator a list of questions (answers in italics):

1.  Assuming by “flat” files you guys mean denormalized tables, is the intent to have a separate DSS env for WOO? (we can ignore for now the issue that the WP/WOO schema is not in any sense 3NF). UPDATE: YES
2.  If yes, is the intent to have it be able to live in a separate database, but on the same server? UPDATE: WP does this already  / unclear how….
3. Is there any long term intent to allow for things like application-level sharding in a distributed DSS/OLTP env? UPDATE: No
4.  When the Woo team develops a new release, do they test against both Maria and MySQL?  If so, do they test against the latest versions of each? Are such test results made public? UPDATE: Yes, they will look into making public
5. If separate OLTP/DSS (or custom tables, to use WP terminology) envs are contemplated, have any stress tests been made to see if DB triggers would work (despite the overhead) within acceptable performance goals to keep the two envs in sync in real time? Again, is there any documentation as to same? UPDATE: NO, but looking into it
6. Finally, with Maria 10.3+ deploying temporal data capability, is there any intent to support that sort of functionality in Woo going forward? UPDATE:  NO

Stripe’s Σ: 1st Impressions

[Screenshot: Stripe Sigma]

Matching information from one’s credit card processor to your WooCommerce installation can be pretty dull — unless it’s done automatically.

This sort of problem is an example of a temporal, meaning time-related, application — a subject that I have become somewhat passionate about of late.

Time is it, at least for me.

Not products.  Not sales.  Not customers.  Not fulfillments.  Not subscriptions. Not SEO. Not the latest framework drama! Just time.

Time, pure and simple.

Imagine if a database were constructed entirely as a function of time.

It’s possible.  Time dimensional databases are old hat.  I am not talking about that. And I’m not only referring to notions of system time vs application time: so-called bitemporal data.

Instead, imagine a database where the primary unit of currency, somehow, is time itself.

Time is the one alluring resource that’s both infinite and non-renewable — depending on one’s perspective.

An e-commerce store, or better yet, a Wall Street issue, can be thought of as a clock automaton that performs a linear traverse through a temporal state machine: a security is traded, then goes through the settlement process.

During this span, events may or may not happen.  These events may or may not reach their ultimate conclusion.  Time may even take on the characteristics of being a non-goal oriented phenomenon.

What is really being sold on Wall Street is in fact time. Today, the time left before the security matures is worth 10 dollars.  Tomorrow, it may be worth only five.

A football game with only 3 minutes left could be worth a touchdown, rarely more.

But in the hands of a skilled quarterback, 60 seconds might be just enough to win the game.  And the stakes get higher with each passing second, which, in effect is worth more than the previous one.

Time may remain suspended in a state of incompleteness, or even reverse course.

Or it may veer off into some other direction, expected or not. Computational linguistics has been abuzz of late on these issues.

Let’s come down from the clouds.

Ideally, a WP plugin that purports to do the job of matching sales and credit card payments correctly and efficiently would be able to retrieve data in real-time from, say, a Stripe server, and match it (or not) to existing WooCommerce Order information.

It should also be capable of doing the same with bank information, which also travels through time, but at a different velocity.

Meanwhile, the credit card processor may put a lock on a merchant’s funds, due to suspicious activity, thus lowering the monetary value of the store owner’s version of time.

Does Stripe’s Sigma product make this matching process easier, or faster?

I took a look today at Sigma, on a trial basis, and my verdict is… possibly.

However, it’s important to keep in mind that Sigma’s data is 2 days old; thus the product may not be ideal if a store owner needs for whatever reason to keep an accurate picture of his or her cash position on a near real-time basis.

Moreover, Sigma does not really give you a fully-enabled  SQL tunnel into a store owner’s transaction portfolio.

Sigma supports a modified SQL shell that allows read-only access.

I tried doing a few standard things in SQL but they turned out to be pretty much unsupported.  It kind of reminded me of the constrained version of CSS and HTML that WP.com makes available to its end users.

I’m going to finish this post the lazy way, simply copying and pasting my notes on the hour I spent earlier today trying out Sigma, while comparing its output to what you can get from downloading Stripe transaction data (which is more up to date).

In the regular Stripe dashboard (non-Sigma interface) you can point-and-click on a slick drop-down to customize the columns you wish to download.

The ALL columns default layout for the Stripe CSV payments export file is:

1 ID,
2 Description,
3 Seller Message,
4 Created (UTC),
5 Amount,
6 Amount Refunded,
7 Currency,
8 Converted Amount,
9 Converted Amount Refunded,
10 Fee,
11 Tax,
12 Converted Currency,
13 Mode,
14 Status,
15 Statement Descriptor,
16 Customer ID,
17 Customer Description,
18 Customer Email,
19 Captured,
20 Card ID,
21 Card Last4,
22 Card Brand,
23 Card Funding,
24 Card Exp Month,
25 Card Exp Year,
26 Card Name,
27 Card Address Line1,
28 Card Address Line2,
29 Card Address City,
30 Card Address State,
31 Card Address Country,
32 Card Address Zip,
33 Card Issue Country,
34 Card Fingerprint,
35 Card CVC Status,
36 Card AVS Zip Status,
37 Card AVS Line1 Status,
38 Card Tokenization Method,
39 Disputed Amount,
40 Dispute Status,
41 Dispute Reason,
42 Dispute Date (UTC),
43 Dispute Evidence Due (UTC),
44 Invoice ID,
45 Payment Source Type,
46 Destination,
47 Transfer,
(a) Interchange Costs, — missing from csv file
(b) Merchant Service Charge, — missing from csv file
48 Transfer Group,
49 PaymentIntent ID,
50 Customer Name (empty),
51 Customer Email (empty),
52 Customer Name,
53 Customer Email,
55 Payment Intent ID ( = order_id col on csv file = WC-generated wp_post_ID)

If you check only the 7 cols below, Stripe silently adds a few extra columns, such as customer name and email.

Nevertheless, you can easily get a three-part match with WC generated trans data, which I showed in an earlier post.

1 Stripe ID (match criterion 1)
4 Created
5 Amount (match criterion 2)
10 Fee
11 Tax
14 Status
55 Payment Intent ID (match criterion 3)

The above does not provide a “net” payment col (meaning from the free admin panel), but Sigma (which is not free) does.
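Sketching ahead, the three-part match itself is trivial once both sides are loaded; the field names below (stripe_id, order_total, payment_intent_id, and friends) are hypothetical placeholders, not actual StoreBoss columns:

// hypothetical three-part match, per the criteria above
function matchesOrder(stripeRow, wcOrder) {
  return (
    stripeRow.id === wcOrder.stripe_id &&                   // criterion 1
    stripeRow.amount === wcOrder.order_total &&             // criterion 2
    stripeRow.payment_intent === wcOrder.payment_intent_id  // criterion 3
  );
}

const stripeRows = [{ id: 'ch_1', amount: 1999, payment_intent: 'pi_1' }];
const wcOrders = [{ stripe_id: 'ch_1', order_total: 1999, payment_intent_id: 'pi_1' }];

// any Stripe row with no matching WC order is a discrepancy to flag
const unmatched = stripeRows.filter(
  (row) => !wcOrders.some((order) => matchesOrder(row, order))
);
console.log(unmatched.length + ' unmatched transactions');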

Sigma’s amt results format is quite funky.
Why is amt a bigint? So that payment can be persisted in the smallest currency unit — to make math and currency conversion easier?
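If that guess is right, it is the standard dodge around binary floating point, which cannot represent most decimal fractions exactly; a quick JS illustration:

// floating point money goes wrong fast
console.log(0.1 + 0.2);            // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);    // false

// integer minor units sidestep the problem entirely
const amountInCents = 1999;                     // $19.99 stored as 1999
console.log((amountInCents / 100).toFixed(2));  // "19.99", formatted only at display time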

[Screenshot: Sigma]

Sigma’s SQL editor does not change table context depending on which table you click in the table list in the left panel of the Sigma SQL “editor”
no live syntax checking of SQL
if you type ‘xelect’ instead of select, you get an error:

err msg: Malformed query: line 2:3: no
viable alternative at input ‘xelect’

hmmm… Cassandra… is that you? a nosql back end for sql queries? i don’t know Cassandra SQL, but this might explain a lot

including why queries are noticeably slow

sigma stripe

very annoying subset of SQL (see screenshots)
cast of bigint to decimal does nothing

The tooltip also gets in the way, though it was a nifty data dictionary idea at first.  I did spend some time trying to figure out if you could turn it off.

I basically played around with the transaction table, trying simple stuff like this

select id, cast(amount as decimal(13,2)),  -- did not work
available_on,
created,
cast(fee as decimal(13,2)), cast(net as decimal(13,2)),  -- nope
type,
source_id
from balance_transactions
where MONTH(created) = 8;

-- division not properly supported, depending on the operands

fee / amount returns 0

but 10 / 5 returns 2

(integer division truncates, so a fee-to-amount ratio below 1 collapses to 0; with the decimal casts broken, there is no way to coax a fraction out)

so I could NOT get SQL to verify the haircut (what Stripe charges)

First look bottom-line conclusion: Sigma is definitely going to allow me to get a quick grasp of Stripe’s API. There’s also a useful manual here. But in view of some of the considerations listed above, this is probably not the ideal tool for an SQL power user, yet paradoxically would be too much for a regular store owner to handle.

Moreover, the 2-day lag is a killer. I do think a non-technical user would initially find it a bit confusing to write legal SQL on the fly, then see error messages pop up. Finally, remember that Stripe sees only its own data (plus some geospatial stuff provided by outside vendors).  For the whole picture, you would need to go elsewhere.

I am going to keep going with Sigma, read the manual, and see if some of my early conclusions need to be revised.

PS:  reading manuals is nice, but I prefer to hit the catalog to see what I’m working with.  Based on the Cassandra theory, I tried to run this, but the query was “rejected”:

SELECT columnfamily_name FROM system.schema_columnfamilies WHERE keyspace_name = 'keyspace name';

I am going to look right off the bat for a visual Sigma database schema in the manual.  If an ERD of some sort does not exist, I shall lament the time I am about to waste creating one.

THIS TIME AND THE TIME BEFORE THAT

[Screenshot: Stripe payments file]
Structure of sb_etrans table – it mirrors the redundant Stripe payment output file almost exactly, while adding a Created_localtime descriptor.

Time is a central concept everywhere — in e-commerce, as it is in life. Some data architects have even argued that non-temporal data is a “trivial extension” of temporal data.

Knowing how long a transaction takes to complete its entire life cycle (or where it fails to) can produce insights into the order flow pipeline, as well as facilitate reconciliation.

Similarly, knowing what time of day, or what day of the week, or what month of the year that customers shop most, can give store owners the ability to discern purchasing patterns that would otherwise go unnoticed.

Unfortunately, obtaining normalized, temporal information is not always the easiest thing to do.

A credit card processor may use UTC time to indicate when a transaction took place. UTC time is convenient in terms of data processing, but it does very little to enhance a merchant’s understanding of localized temporal shopping trends.

It would be simplistic to say that a one-statement solution is sufficient to normalize everything.  An example of a naive temporal query is as follows:

select UTC_TIMESTAMP() - now();

A WC-plugin programmer has to infer a merchant’s actual timezone if he or she does not know in advance where the application will run.

Algorithms that take into account a global merchant’s entire temporal ecosystem can get quite complex.

Such a merchant may have customers located in any time zone around the world, or have a geo-distributed platform in which time zones’ support for daylight saving time varies.

There is also always the possibility — however remote, but a consideration that a completist developer would take into account — that an operating system may be set to one time zone, the RDBMS running under it to another (which can be further complicated by distributed master-slave server deployments), and the runtime programming language (such as PHP) may default to yet another TZ setting.

It is therefore obvious that some means must be found to determine events in equivalent time: for example, the local (to the customer) time that a customer purchased an item, and the local (to the merchant) time that funds became available in their treasury account(s).
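For intuition, this is exactly the bookkeeping the IANA timezone machinery does automatically on the client side; a JS one-off (assuming a runtime with full ICU, i.e. any modern browser or a standard Node build), not the pure-SQL route required below, but a handy cross-check:

// Intl resolves each zone's DST rules for you -- the very work the
// SQL function below has to hand-roll with a lookup table
const createdUtc = new Date('2019-08-15T14:00:00Z');

// local to a New York merchant (EDT, UTC-4 on that date)
console.log(createdUtc.toLocaleString('en-US', { timeZone: 'America/New_York' }));
// "8/15/2019, 10:00:00 AM"

// local to a Paris customer (CEST, UTC+2 on that date)
console.log(createdUtc.toLocaleString('en-US', { timeZone: 'Europe/Paris' }));
// "8/15/2019, 4:00:00 PM"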

My goal in this post is to create a simple function CONVERT_UTC_2_LOCALT that converts a UTC time value that is found in Stripe’s payment file to a localized one.

Here are the (ideal) requirements:

1. It must convert UTC time for the current year (at time of coding, this is 2019)
2. It must also convert UTC time for the previous year
3. The conversion must work irrespective of an end user’s local timezone
4. Conversion must take into account DST (in the US, and, after testing, in Europe and perhaps elsewhere)
5. If UTC values are outside this range, then the local time bucket is set identical to UTC
6. The entire function must be written in SQL
7. Must be tested against a sufficiently rich data set for performance evaluation

Here is a read-only, key | value table that is useful as a fast lookup of hardcoded data values.

CREATE TABLE sb_temporality (
sb_meta_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
sb_meta_key VARCHAR(255) NOT NULL,
sb_event_marker_UTC DATETIME NULL, -- DATETIME, since the DST switch happens at 02:00
sb_ordinal_date_UTC SMALLINT NULL, -- aka "Julian date"
PRIMARY KEY ( sb_meta_id ) );

CREATE UNIQUE INDEX sb_tempindx ON sb_temporality(sb_meta_key);

Here’s the data contained in this “helper” table.

-- 2019 DST rows

INSERT sb_temporality
VALUES(
0,
'2019_start_dst_us',
'2019-03-10 02:00:00',
DAYOFYEAR('2019-03-10')
);

INSERT sb_temporality
VALUES(
0,
'2019_end_dst_us',
'2019-11-03 02:00:00',
DAYOFYEAR('2019-11-03')
);

-- 2018 DST rows

INSERT sb_temporality
VALUES(
0,
'2018_start_dst_us',
'2018-03-11 02:00:00',
DAYOFYEAR('2018-03-11')
);

INSERT sb_temporality
VALUES(
0,
'2018_end_dst_us',
'2018-11-04 02:00:00',
DAYOFYEAR('2018-11-04')
);
[Screenshot: StoreBoss table]
The extensible “temporality” table – does it look familiar?

Note that rows can easily be added to support additional time zones, or other sorts of calculations that need quick lookup values; for example, to support domains that require, say, decimal values or other data types.

Here’s the stored function.

DELIMITER //

CREATE OR REPLACE FUNCTION CONVERT_UTC_2_LOCALT(Created_UTC DATETIME)
RETURNS DATETIME DETERMINISTIC
BEGIN

/* variables */

DECLARE now, maintenant DATETIME;

DECLARE curr_start_dst,
curr_end_dst,
prev_start_dst,
prev_end_dst SMALLINT;

/* initialization */

SET now = now();

-- requirement 5: default to UTC for values outside the 2018-19 range
SET maintenant = Created_UTC;

SELECT sb_ordinal_date_UTC INTO curr_start_dst
FROM sb_temporality WHERE sb_meta_key = '2019_start_dst_us';
SELECT sb_ordinal_date_UTC INTO curr_end_dst
FROM sb_temporality WHERE sb_meta_key = '2019_end_dst_us';
SELECT sb_ordinal_date_UTC INTO prev_start_dst
FROM sb_temporality WHERE sb_meta_key = '2018_start_dst_us';
SELECT sb_ordinal_date_UTC INTO prev_end_dst
FROM sb_temporality WHERE sb_meta_key = '2018_end_dst_us';

/* mainline */

-- timediff(now(), utc_timestamp()) yields the server's *current* UTC offset;
-- the non-DST branches subtract an hour from it, which assumes the function
-- is being run while the server clock is itself on DST

-- current year, within DST interval

IF ( YEAR(Created_UTC) = YEAR(now) )
AND ( DAYOFYEAR(Created_UTC) BETWEEN curr_start_dst AND curr_end_dst )
THEN SELECT CONVERT_TZ(Created_UTC, '+00:00', LEFT(timediff( now(), utc_timestamp() ), 6))
INTO maintenant;
END IF;

-- current year, outside DST interval

IF ( YEAR(Created_UTC) = YEAR(now) )
AND ( DAYOFYEAR(Created_UTC) NOT BETWEEN curr_start_dst AND curr_end_dst )
THEN SELECT CONVERT_TZ(Created_UTC, '+00:00', LEFT(timediff( now(), utc_timestamp() ), 6)) - INTERVAL 1 HOUR
INTO maintenant;
END IF;

-- previous year, within DST interval

IF ( YEAR(Created_UTC) = YEAR(now) - 1 )
AND ( DAYOFYEAR(Created_UTC) BETWEEN prev_start_dst AND prev_end_dst )
THEN SELECT CONVERT_TZ(Created_UTC, '+00:00', LEFT(timediff( now(), utc_timestamp() ), 6))
INTO maintenant;
END IF;

-- previous year, outside DST interval

IF ( YEAR(Created_UTC) = YEAR(now) - 1 )
AND ( DAYOFYEAR(Created_UTC) NOT BETWEEN prev_start_dst AND prev_end_dst )
THEN SELECT CONVERT_TZ(Created_UTC, '+00:00', LEFT(timediff( now(), utc_timestamp() ), 6)) - INTERVAL 1 HOUR
INTO maintenant;
END IF;

RETURN maintenant;

END
//

Now all you have to do is run the following query.

UPDATE sb_etrans
SET Created_localtime = 
CONVERT_UTC_2_LOCALT(Created_UTC);

In this implementation, CONVERT_UTC_2_LOCALT(datetime) handles DST and non-DST intervals for “-0500” UTC offsets (which includes EST). It performs temporal normalization on transactions that were generated during 2018-19. Thanks to the sb_temporality table, it’s a straightforward exercise to extend the TZ conversion to other countries or date ranges.

Here’s the before and after.

CONVERT_UTC_2_LOCALT(datetime) greatly simplifies normalizing UTC, and can easily be added to a script that loads Stripe data to the appropriate StoreBoss table on the WP server.
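As a sketch of that load script (assuming the plain ‘mysql’ npm client and placeholder credentials), the normalization step boils down to one call:

// after the Stripe CSV rows land in sb_etrans, normalize in one statement
const mysql = require('mysql');

const conn = mysql.createConnection({
  host: 'localhost',
  user: 'storeboss',        // placeholder credentials
  password: 'secret',
  database: 'storeboss',
});

conn.query(
  'UPDATE sb_etrans SET Created_localtime = CONVERT_UTC_2_LOCALT(Created_UTC)',
  (err, result) => {
    if (err) return console.error(err.message);
    console.log('normalized ' + result.changedRows + ' rows');
  }
);

conn.end();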

 

Jumping Ahead – Handling Daylight Saving Time

[Figure 1: StoreBoss Data Flows]

StoreBoss™ relies on three important temporal data sources:

  1. the WooCommerce (WC) DB;
  2. Stripe transactional data; and
  3. a WC store owner’s bank Statement of Account.

StoreBoss performs a “tick-and-tie” matching operation between these sources of transactional data. Discrepancies may arise due to differences in time settings.*

DST adds a further complication, by causing transactional time to be shifted on recurring intervals, depending on which TZ a store owner has set for his or her WP installation.

Per Stripe’s documentation,

TIME ZONE DIFFERENCE

All payments and payouts are processed according to UTC time. As a result, the processed date may not be the same as your local time zone.

A desirable solution is to normalize the Stripe charge date by extending MariaDB (or MySQL) with a stored function that automatically and consistently performs a correction in all use cases.

By applying this function to produce a new temporal value (which is persisted in a custom descriptor during file loads), StoreBoss can calculate normalized lag times between the various servers that produce transactional data.

Thus, StoreBoss will be able to more accurately verify a store owner’s Cash Balance Ledger position.

Let’s get into some technical details.

Assume you have downloaded from Stripe the previous day’s transactions and manually loaded them into a work table whose structure matches the file’s.  While creating the necessary DDL, you might notice that a field named “Created(UTC)” is used by Stripe to indicate when news of the transaction was conveyed to your bank.

After uploading the CSV file (which happens to contain 1 record) to the work table on Maria, you decide to create a view that contains a column showing the time of the event in local units.  The code for such a view, producing the local time translation for any global time zone, would look something like this:

CREATE VIEW IF NOT EXISTS
`sb_etrans_vw` AS
SELECT
...
Created_UTC AS 'Created UTC',
CONVERT_TZ(Created_UTC, '+00:00', LEFT(timediff( now(), utc_timestamp() ), 6))
AS 'Event instance',
...
FROM sb_etrans;

As you can see below, this code does its job perfectly well — except for two fatal flaws: it does not check whether a transaction falls within the DST interval for your timezone, and that interval’s start and end dates (expressed as day numbers, not as weekdays) change yearly.

For example, the second Sunday of March in 2019 was March 10 (day 69 of the year), and the first Sunday of November will be November 3 (day 307).  In 2018, these dates were March 11 and November 4 (days 70 and 308).

[Screenshot: StoreBoss UTC testing]

To facilitate its use, the modified code could be encapsulated, as follows:**

CREATE VIEW IF NOT EXISTS
`sb_etrans_vw` AS
SELECT
...
Created_UTC AS 'Created UTC',
CONVERT_UTC_2_LOCALT(Created_UTC)
AS 'Event instance',
...
FROM sb_etrans;

We shall be exploring CONVERT_UTC_2_LOCALT in some depth.

In future posts, I will explore Stripe’s Sigma product, which allows WC store owners to perform extensive data analysis.

Sigma may be a preferred way for SQL-savvy end users to obtain needed data from Stripe — without having to go through Stripe’s REST API and JSON file decoding and OAuth authorizations.

To be sure, Sigma is not free; for many SMB WC store owners — particularly in view of Sigma’s significant data lag — it may not be worth $500 a year.

Moreover, the hurdle of its SQL interface can be quite intimidating to a nontechnical end-user.

Having said all that, let’s write the UTC conversion function for our manually-downloaded CSV STRIPE payments file.

*UTC has been in the news of late!

**Realistically, you would only have to account for the last 2 years of transactions, on a rolling basis, as this is typically a banking archive limit.