A Data Panel engine capable of dealing with both SQL and JSON data, taking advantage of relational DB features and JSON structured data.
Its unique approach makes it easier and much, much faster to express complex operations, as well as to deal with deep indexing.
| Server version: | 1.3m |
| Started on: | 07/11/2025, 11:32:05 |
| Base URI: | https://drs.hbay.it/ |
| DB URI: | https://drs.hbay.it/db/ |
| Admin Backend: | active |
Running as a service (systemctl)
Dataset structure, URI and file system level
Request env, dataset environment variables
drs, the Data RESTorage helper for operation files
[DATA] drs.list( [ schema [, entity ] ] )
[DATA] drs.set( cell [, content ] )
[SQL] drs.sql( SQL-statement [, values] )
[SQL] drs.insert( tableName, data )
[SQL] drs.checkSqlTable( tableName [, tableCreationSql] )
[STATIC] drs.dump( path [, content] )
[STATIC] drs.dumpList( [path] )
[TRANSACTION] drs.send( what [, ContentType ] )
[UTIL] drs.sub( params, data )
Dealing with sub-operations: caveats
[UTIL] drs.detach( params, data )
[UTIL] drs.fetch( URL [, RequestInit] )
[UTIL] drs.import( URL [, RequestInit] )
[UTIL] drs.console( whatever {, whatever} )
[UTIL] drs.shelf( [key, [value]] )
[UTIL] drs.encrypt( clearText )
[UTIL] drs.cryptVerify( encrypted, clearText )
[UTIL] drs.exec( command, [options] )
[UTIL] drs.extension( extensionName )
- npm install
- configure the .env file
- npm start
That's it.
To install Node v23 on a *nix system: curl -fsSL https://deb.nodesource.com/setup_23.x | bash
Windows was never taken into account; a *nix system is required.
If you want to reset everything to factory values:
.env file
Let's say we install the server with user "drs" in path "/home/drs/datarestorage/server".
- Install Node: curl -fsSL https://deb.nodesource.com/setup_23.x | bash
- Create the user "drs"
- Create /home/drs/datarestorage, owned by drs
- Create /home/drs/datarestorage/server, owned by drs
- In /home/drs/datarestorage/server/, the necessary files are: package.json, src/, assets/, .env, *.js, README.*
- Configure .env, or do it in the service's environment if you use systemctl.
- In the /home/drs/datarestorage/server/ directory, run: npm install
Create the file /etc/systemd/system/drs-server.service, with content:
[Unit]
Description=Data RESTorage Server
[Service]
User=drs
Environment="THIS_SERVER_WEB_URI=https://my.external.uri/"
Environment="THIS_SERVER_PORT=3065"
WorkingDirectory=/home/drs/datarestorage/server
ExecStart=/bin/sh -c 'npm start > /home/drs/datarestorage/log.txt'
SuccessExitStatus=143
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Now run:
systemctl daemon-reload
systemctl enable drs-server
systemctl start drs-server
To check: systemctl status drs-server, cat /home/drs/datarestorage/log.txt
Done, service is active.
You can read this page at https://drs.hbay.it/
One of the most important aspects of Data RESTorage is that it's not designed to support one application or set of applications.
On the contrary, it is built around the concept of "dataset".
A dataset is an isolated world, a data-space, a private environment for a web app or a group of web apps.
Actually DRS isn't just a plain backend engine, it's rather a factory of backend engines.
One single DataRESTorage instance can support plenty of independent, isolated web applications (or sets of web applications).
A dataset:
There is a hierarchy under dataset, helping to keep order among both config and data files.
For both data and config items (i.e.: both memory cells and operations), the hierarchy is: schema + entity.
Conceptually, schema is the equivalent of a realm (a database for data, a group of operations for config), while entity is the type (a table for data, an operation target for config).
All operations and memory cells (individual data items) must follow the hierarchy.
This hierarchy affects both URIs and file system under dataset's roots. At URI level:
At file system level:
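For instance, assuming a hypothetical dataset "mydata" with schema "crm", entity "customers" and operation "list", the two levels map like this (following the URI pattern and the operation-file naming described below):

URI:            https://drs.hbay.it/db/mydata/crm/customers/list
Operation file: <configRoot>/crm/customers/op-list.mjs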
Optionally datasets can be provided with their own authentication.
Dataset owners can also manage their own dataset security using the setup GUI.
If/when authentication via X-Header or username/password is processed, the operation receives the authenticated user (if any).
In this case it's also possible to attach some "flags" (key/value) to the user for application purposes (e.g.: group, role, language).
A URI is something like: https://drs.hbay.it/db/<dataset>/<schema>/<entity>/<operation>
The https://drs.hbay.it/db/<dataset>/ portion isolates the dataset, while the following <schema>/<entity>/<operation> is under dataset's control.
We can specify blocked URL(s) using strings like:
- <schema>/<entity>/<operation>
- <schema>/<entity>/
- <schema>/
When requests start with a blocked URL they're rejected, regardless of any other consideration, setting or parameter.
Use this option if you need a few private URLs only accessible by operation files, invoked with drs.sub() (e.g.: maintenance, shared routines).
Processed right after Blocked URLs, they use the same logic - but inverted.
When requests start with a free-access URL they're accepted without processing authentication.
Use this option if you need a few URLs accessible before authentication occurs, e.g.: for login purposes.
This authentication provides a custom X-header whose name can be defined by server admin or by dataset owners.
Of course this can be used as a simple plain API key; on the other hand…
It's relevant to say that its definition includes one or more users, each identified by their own value of the X-header.
So it's not a simple anonymous API key: it can be used to identify users and/or web applications - don't forget users' flags.
It's the classic "Authorization: Basic" HTTP(S) schema.
A remarkable point: this is the only authentication method supported by setup GUI, if it's running.
A static (react) setup GUI is available.
It can work as soon as the server starts with a valid ADMIN_BACKEND_URI environment variable.
If this variable is missing (or has an empty value), the backend support for the setup GUI isn't loaded at all.
By default, it's accessible at server level by "admin" user, whose password is set with ADMIN_DEFAULT_PASSWORD environment variable.
In case you forget this password: access the setup JSON, delete the related password value, reset the ADMIN_DEFAULT_PASSWORD variable and restart the server.
In addition, dataset owners can access the setup GUI as well:
In this case their access will be restricted to their own dataset management.
Each endpoint follows a URL like: https://drs.hbay.it/db/<dataset>/<schema>/<entity>/<operation>
It could be convenient to reverse-proxy your webapp's /api/ location to http://localhost:3065/db/<dataset>/.
Operation REST entry points are accessible using both GET and POST.
Using both GET and POST it's possible to set params.output by adding the appropriate extension, .json or .xml
See:
When POSTing provide in request body (type: application/json) a JSON object:
- https://drs.hbay.it/db/ if you provide them all.
- https://drs.hbay.it/db/<dataset> if you specify only schema, entity and operation, and so on.
Using POST we typically send a JSON object with both params and data keys.
While data contains actual operation data, params is intended to specify the type of request.
It's a javascript object with many optional keys, some directly managed by Data RESTorage.
Even if params isn't specified in the request, there will always be a params object in the context of the operation script.
Request-specific keys, managed by user:
- output: {"xml"|"json"} Output format, defaults to "json"
- dry_run: {boolean|number|"true"|"false"|"Yes"|"Y"|"No"|"N"} Set to true or deleted after backend analysis.
dry_run can also be detected by a "dry_run" key in data, with a value like {boolean|number|"true"|"false"|"Yes"|"Y"|"No"|"N"}.
If you want to avoid dry_run detection in data, simply add it with the appropriate value in params.
Mixed, can be in user's request and are also managed by backend:
- dataset: {string} dataset name, overwritten if derived from URI
- schema: {string} schema name, overwritten if derived from URI
- entity: {string} entity name, overwritten if derived from URI
- operation: {string} operation name, overwritten if derived from URI
Strictly managed by backend:
- auth_type: {string|false} Written/deleted by backend depending on dataset configuration.
- auth_user: {string} Written/deleted by backend, reports the authenticated user if any.
- auth_flags: {Record<string, any>} Written/deleted by backend, reports optional flags associated to an authenticated user.
Other keys are reported and simply ignored, a pure by-pass for the operation script.
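For instance, a complete POST to the echo operation described below might look like this (a sketch; values are illustrative):

POST https://drs.hbay.it/db/test/test/test/echo
Content-Type: application/json

{
  "params": { "output": "json", "dry_run": true },
  "data": { "foo": "bar" }
}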
"env" request node is automatically populated by the backend before passing the context to the operation script.
Each dataset can have a set of environment variables reported in "env", following the classic key/value schema.
Such variables are useful to report parametric values that could change from one server to another, e.g.: some service URI.
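A minimal sketch of reading env inside an operation; it assumes custom dataset variables appear under context.env.dataset alongside the httpUri/webUri keys visible in the echo example below (MY_SERVICE_URI is a hypothetical variable):

/** @type {import("../../DrsOperation").DrsOperation} */
const myOperation = async (context, drs) => {
  const base = context.env.dataset.httpUri;               // reported by the backend
  const serviceUri = context.env.dataset.MY_SERVICE_URI;  // hypothetical dataset variable
  return { base, serviceUri };
}
export default myOperation;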
Each operation is a file under dataset configRoot folder named <schema>/<entity>/op-<operation>.mjs
This file exports a default method receiving two input parameters:
- context: the whole information about the request (params, data, environment)
- drs: a helper object allowing the module to interact with the Data Panel application.
Everything is easier if you download https://drs.hbay.it/DrsOperation.d.ts
This provides the DrsOperation.d.ts interfaces file, which can conveniently be placed at dataset directory level.
It will be something like this (schema: test, entity: test, operation: echo):
<configRoot>/
DrsOperation.d.ts
test/
test/
op-echo.mjs
Our echo operation is a typical operation file named op-echo.mjs:
/** @type {import("../../DrsOperation").DrsOperation} */
const myOperation = async (context, drs) => {
// Perform echo of context input
return(context);
}
export default myOperation;
The default export is the method that will be invoked.
Here's an example of output of the above operation.
Request: https://drs.crossbow.it/db/test/test/test/echo?foo=bar&dry_run=1
Response:
{
"params": {
"dataset": "test",
"schema": "test",
"entity": "test",
"operation": "echo",
"auth_type": false,
"dry_run": true
},
"data": {
"foo": "bar",
"dry_run": "1"
},
"env": {
"dataset": {
"httpUri": "http://localhost:3065/db/test/",
"webUri": "https://drs.crossbow.it/db/test/"
},
"headers": {
"host": "drs.crossbow.it",
"sec-fetch-site": "none",
"sec-fetch-mode": "navigate",
"accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.1.1 Safari/605.1.15",
"accept-language": "en-GB,en;q=0.9",
"sec-fetch-dest": "document",
"accept-encoding": "gzip, deflate, br",
"via": "1.1 drs.crossbow.it",
"x-forwarded-for": "pi5.lan.crossbow.it",
"x-forwarded-host": "drs.crossbow.it",
"x-forwarded-server": "drs.crossbow.it",
"connection": "Keep-Alive"
}
}
}
This example server is running on a very busy Raspberry Pi; its response can require up to 20 ms.
On a different machine (a VM on a desktop Mac) the time decreases, becoming as low as 4 ms.
Even when the operation script changes and must be reloaded, the time always remains under 15 ms.
This is a javascript object exposing interaction methods.
All interfaces are documented in https://drs.hbay.it/DrsOperation.d.ts
Import a script (js or mjs) or fetch a file (JSON or text) from the configuration tree, external to the operation .mjs file.
Changes in configuration files are automatically detected, just as happens for operation .mjs files.
List a dataset, a schema or an entity.
In the case of dataset or schema, directory names are returned without the trailing "/".
The list is aware of transaction changes not committed yet.
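A minimal sketch, following the ( [schema [, entity] ] ) signature (names reuse the test dataset of the examples below):

const schemas = await drs.list();               // list the dataset's schemas
const entities = await drs.list('test');        // list the entities of schema "test"
const cells = await drs.list('test', 'test');   // list the content of entity "test"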
Returns true if a memory cell file exists and has at least one key, false otherwise.
Memory cell coordinates are an Object with keys:
Read a memory cell JSON file returning its data, returns undefined if file doesn't exist.
Just like in drs.exists(), memory cell coordinates are an Object with keys:
Set a memory cell JSON file in data, deletes it if content is undefined
Delete a memory cell JSON file in data
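A minimal sketch of drs.set(); the {schema, entity, id} coordinate keys are an assumption, borrowed from the zipOut() example later on this page:

// Write a memory cell, then delete it (undefined content deletes the file)
await drs.set({ schema: 'test', entity: 'test', id: 'greeting' }, { text: 'hello' });
await drs.set({ schema: 'test', entity: 'test', id: 'greeting' }, undefined);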
Execute a MariaDB statement (with or without bind values) and returns result (if any).
Example:
await drs.sql(
  'INSERT INTO myTable (Name,Age) VALUES (?,?),(?,?)',
  ['Bob',6,'John',23]
);
Always prefer to bind params and whatever comes from input, as this prevents SQL injection.
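For instance, a bound SELECT could look like this (myTable reuses the example above; minAge is a hypothetical request input):

const rows = await drs.sql(
  'SELECT Name, Age FROM myTable WHERE Age > ?',
  [ context.data.minAge ]   // bound, never concatenated into the statement
);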
Execute a convenient "INSERT / ON DUPLICATE KEY UPDATE" statement.
Automatic values binding, preventing SQL injections.
Can insert a single row or multiple rows, then returns an array of inserted/updated rows.
Example:
const foo = await drs.insert(
'myTable',
[ { Name: "Bob", Age: 6 }, { Name: "John", Age: 23 } ]
);
/*
Suppose there's an "id" column with AUTO_INCREMENT,
foo is: [{"id":234,"Name":"Bob","Age":6},{"id":235,"Name":"John","Age":23}]
*/
const bar = await drs.insert(
'myTable',
{ Name: "Jane", Age: 22 }
);
/*
bar is: [{"id":236,"Name":"Jane","Age":22}]
Returned value is always an array.
*/
Check whether the table exists; if it doesn't, optionally create it using the tableCreationSql code (if provided).
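A minimal sketch, following the ( tableName [, tableCreationSql] ) signature (the table definition is illustrative):

await drs.checkSqlTable(
  'myTable',
  `CREATE TABLE myTable (
    id INT AUTO_INCREMENT PRIMARY KEY,
    Name VARCHAR(64) NOT NULL,
    Age INT
  )`
);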
Send an email.
Set the content of a static file under (some) dumpRoot location; deletes it if content is undefined.
Some relevant considerations about dump operations:
How to express the path of a dump() operation:
- "<path>"
- "<alias>:<path>"
Example:
Let's say we've this configuration: dumpRoot = { web: "/var/sites/mysite" }
Here are the various path expressions, and their actual resulting locations:
- "" => "/var/sites/mysite/"
- "/" => "/var/sites/mysite/"
- "index.html" => "/var/sites/mysite/index.html"
- "/index.html" => "/var/sites/mysite/index.html"
- "//index.html" => "/var/sites/mysite/index.html"
- "web:" => "/var/sites/mysite/"
- "web:index.html" => "/var/sites/mysite/index.html"
- "web:/index.html" => "/var/sites/mysite/index.html"
But…
- ".." => Error 403 (Forbidden)
- "./../index.html" => Error 403 (Forbidden)
- "web:/../foo" => Error 403 (Forbidden)
- "www:/index.html" => Error 500 (wrong alias)
Before using any path, it's exploded into its actual final value.
Should the value fall outside allowed locations, this would result in a 403 Error.
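A minimal sketch, using the dumpRoot configuration above:

// Writes /var/sites/mysite/hello.txt
await drs.dump('web:hello.txt', 'Hello, world!');
// Undefined content deletes the file
await drs.dump('web:hello.txt', undefined);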
List of files and directories on a static dump location.
Get the content of a static file under (some) dumpRoot location.
Commit changes to sql, data and file-system.
- Automatically triggered at the end of a successful operation.
- After commit, all data and static directories that became empty (because of file deletion) are automatically deleted.
- When in dry_run mode, drs.commit() is equivalent to drs.rollback() - all changes are reverted.
Revert every change to sql, data and file-system (that wasn't committed yet).
Automatically triggered at the end of an operation that ran into an error, as well as at the end of a successful operation with the dry_run param.
Send (once) the output for the request; further send action(s) will be ignored.
drs.send() is automatically performed with the returned result of the operation.
On the other hand, especially in case of time-consuming async processes, or processes requiring a callback, it's possible to quickly send a response and then keep the operation running.
Please note that send()-ing an Error() doesn't stop the script, nor does it perform a rollback.
If you want to stop the script and perform a drs.rollback(), throw an Error() instead.
/*
Send an error and keep doing stuff, eventually commit() on return.
This returns a 500: {"status":500, "type": "Error", "error":"You're wrong, I'll do something!"}
*/
drs.send(new Error("You're wrong, I'll do something!"));
drs.send(new Error("You're wrong, I'll do something!"), true); // This will debug error on console/log
/*
Want to send an Exception, instead, stopping execution and performing a rollback?
Quite easy: throw an Error!
This returns a 500 as well: {"status":500, "type": "Error", "error":"You're wrong, I exit!"}
At the same time operation script exits with error, and rollback() is performed.
*/
throw new Error("You're wrong, I exit!");
/*
You can also customize exit status, prefixing it with a 4xx or 5xx status.
This returns a 403: {"status":403, "type": "Error", "error":"Forbidden"}, stops and rolls back.
*/
throw new Error("403 Forbidden");
An internal sub-call to another operation in the same dataset.
- Security matches original operation security.
- Authorization params (auth_*) are preserved
- The dry_run param is preserved
- In params, operation is mandatory, while schema and entity default to current operation values
The sub-operation is performed just like every other regular operation, except it doesn't pass through HTTP.
It's a promise resolving to the sub-operation's return value, raising an exception in case of error.
Because it doesn't pass through HTTP, env in the sub-operation context contains the dataset key and no headers.
Since it's not HTTP, adding an output extension to the operation would result in an error.
In addition, in sub-operations the params' output isn't propagated and is completely ignored. The return value is always a string (possibly JSON).
Be aware that a sub-operation is a completely separated task, with its own transactional status.
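A minimal sketch, following the ( params, data ) signature ("my-sub" is a hypothetical operation of the current schema/entity):

// Invoke another operation of the same dataset, without passing through HTTP
const result = await drs.sub(
  { operation: 'my-sub' },   // schema and entity default to the current ones
  { foo: 'bar' }             // data passed to the sub-operation
);
const parsed = JSON.parse(result);   // the return value is always a string (possibly JSON)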
A detached sub-call to another operation in the same dataset.
Acts just like drs.sub(), but instead of promising a return value it keeps the call detached
and executed asynchronously even after the end of the current operation.
- Security matches original operation security.
- Authorization params (auth_*) are preserved
- The dry_run param is preserved
- In params, operation is mandatory, while schema and entity default to current operation values
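A minimal sketch ("my-notifier" is a hypothetical operation):

// Fire-and-forget: the detached call keeps running after this operation returns
drs.detach(
  { operation: 'my-notifier' },
  { event: 'done' }
);
return { queued: true };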
Fetch an external resource using node-fetch.
Import an external javascript.
For details, see the npmjs.com website.
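Minimal sketches, assuming drs.fetch() resolves to a node-fetch style Response (URLs are illustrative):

const res = await drs.fetch('https://example.com/data.json');
const data = await res.json();
const lib = await drs.import('https://example.com/lib.mjs');   // hypothetical external module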
Used for development and debug: put one or more values on the return console.
If used, and if a JSON object is returned, then a drs-debug property will be added to the return value, with all the debugged stuff in sequence.
Used for development and debug: send the content to dataset's log.
Dataset's log log.txt is visible in dataset's dataRoot or in the setup GUI.
It reports the output of operation logs as well as other dataset's errors.
Return a UUID (string)
Clone a structured object
A shelf where we can store values, objects, code and libraries, and share them across the drs helper object lifetime.
A kind of session, but limited and isolated to the single request.
Usage:
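A minimal sketch, inferred from the ( [key [, value]] ) signature shown above:

drs.shelf('user', { id: 42 });    // store a value on the shelf
const user = drs.shelf('user');   // read it back later, within the same request
drs.shelf('user', null);          // null removes 'user' from the shelf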
- A null value removes the object 'name' from the shelf
Encode and decode Base64, just like btoa() and atob() in browsers.
Async function to hash-encrypt a cleartext password.
Async function to verify match of an encrypted value with a cleartext password.
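A minimal sketch of the two functions together:

const hashed = await drs.encrypt('s3cret');          // hash-encrypt a cleartext password
const ok = await drs.cryptVerify(hashed, 's3cret');  // truthy when the two match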
Async function triggering the execution of an external command.
Promise object has the following keys:
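A minimal sketch of a call (the exact keys of the resolved object aren't shown here; they're documented in DrsOperation.d.ts):

const result = await drs.exec('ls -la');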
Returns a zip Agent that can be used to return a zip file as response.
Example:
/** @type {import("../../DrsOperation").DrsOperation} */
const myOperation = async (context, drs) => {
const myZip = drs.zipOut();
let mySize = await myZip.add({schema: "foo", entity: "bar"});
mySize += await myZip.add({id: "index"});
return myZip.send("my-archive.zip");
}
export default myOperation;
Notes:
- In .add(), schema and entity -if missing- default to current schema and entity.
- The .send() method returns a NonSharedBuffer; in addition, it sets the response headers necessary to download the ZIP archive.
- .send() can be used only once.
- After .send() you can no longer add anything to the archive.
- After .send() you can no longer return JSON or XML; a ZIP archive (its buffer) is expected.
Import an extension module (Object) with a dynamic import()
The default export of this module parses a string and returns the resulting window object.
Example:
const parseHTML = (await drs.extension('string2htmlDom')).default;
const myWindowObject = parseHTML('<html><body>Hello!</body></html>');
return( myWindowObject.document.body.innerHTML );
- To serialize (convert to string) a whole HTML DOM, use: myWindowObject.serialize()
- It's also possible of course to retrieve portions, e.g.: myWindowObject.document.body.outerHTML
The default export of this module parses a string or XML DOM object and returns the resulting javascript object.
The default export is also exported as xml2js
The string2xml export parses a string and returns its XML DOM.
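A minimal sketch of the two exports (the XML input is illustrative):

const x2jModule = await drs.extension('xml2js');
const x2j = x2jModule.default;        // also exported as xml2js
const s2x = x2jModule.string2xml;     // string -> XML DOM
const dom = s2x('<root><foo>bar</foo></root>');
const obj = x2j(dom);                 // x2j also accepts the string directly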
The default export of this module parses a javascript object and returns the resulting XML.
The default export is also exported as js2xml
The xml2string export parses an XML DOM and returns its serialization.
Examples:
// By default conversion returns XML serialization string
const j2x = (await drs.extension('js2xml')).default;
const xmlString = j2x( context.params );
await drs.send( xmlString, "text/xml; charset=UTF-8");
return;
// It's also possible to retrieve the XML Document
const j2xModule = (await drs.extension('js2xml'));
const j2x = j2xModule.default;
const x2s = j2xModule.xml2string;
const xmlDocument = j2x( context.params, { getDocument: true } );
await drs.send( x2s(xmlDocument), "text/xml; charset=UTF-8");
return;
The default js2xml export accepts an optional second options parameter, with keys:
- rootName {string}, the name of the output root node, defaults to "root"
- rootArrayName {string}, the name for the top element (under root) if data is an array; defaults to "content"
- getDocument {boolean}, if true returns the XML Document, otherwise its string serialization
The two XML conversions (xml2js and js2xml) work together. They preserve:
- number type, marked by "@number" attribute in XML
- boolean type, marked by "@boolean" attribute in XML
- Dates, marked by "@date" attribute in XML, converted to mSec in XML and back to Date in js
- Arrays, marked if necessary with "@array" or "@empty-array"
- Null values, marked with "@null" in XML and returned as null
- Strings, including empty strings
- Functions, in XML serialized and marked with "@function", functions again in JS
Example: op-echo-xml.mjs file implementing echo-xml operation
/** @type {import("../../DrsOperation").DrsOperation} */
const myOperation = async (ctx, drs) => {
const j2x = (await drs.extension('js2xml')).default;
drs.send( j2x( ctx), 'text/xml; charset=UTF-8' );
return( ctx );
}
export default myOperation;
Example output:
<root>
<params>
<dataset>test</dataset>
<schema>test</schema>
<entity>test</entity>
<operation>echo-xml</operation>
<auth_type boolean="true">false</auth_type>
<dry_run boolean="true">true</dry_run>
</params>
<data>
<foo>bar</foo>
<dry_run>1</dry_run>
</data>
<env>
<dataset>
<httpUri>http://localhost:3065/db/test/</httpUri>
<webUri>https://drs.crossbow.it/db/test/</webUri>
</dataset>
<headers>
<host>drs.crossbow.it</host>
<sec-fetch-site>same-origin</sec-fetch-site>
<accept-encoding>gzip, deflate, br</accept-encoding>
<sec-fetch-mode>navigate</sec-fetch-mode>
<accept>text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8</accept>
<user-agent>Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.1.1 Safari/605.1.15</user-agent>
<referer>https://drs.crossbow.it/</referer>
<sec-fetch-dest>document</sec-fetch-dest>
<accept-language>en-GB,en;q=0.9</accept-language>
<via>1.1 drs.crossbow.it</via>
<x-forwarded-for>pi5.lan.crossbow.it</x-forwarded-for>
<x-forwarded-host>drs.crossbow.it</x-forwarded-host>
<x-forwarded-server>drs.crossbow.it</x-forwarded-server>
<connection>Keep-Alive</connection>
</headers>
</env>
</root>
Returns the readXlsxFile object.
Exports:
- readXlsxFile from readExcel (also exported as default)
- readSheetNames from readExcel
- excel2JS, an async facility to parse an Excel with tabular data in one or more sheets
Client-side, uploading an Excel file:
const uploadExcel = () => {
window.document.body.querySelectorAll(':scope > input.autoInputField').forEach( x => {
x.remove();
})
const input = window.document.createElement('input');
input.name = 'file';
input.setAttribute('type', 'file');
input.setAttribute('accept', '.xlsx, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet');
input.style.display = 'none';
input.className = 'autoInputField';
window.document.body.appendChild(input);
return new Promise( ( resolve ) => {
input.addEventListener('change', async () => {
const selectedFile = input.files[0];
if ( selectedFile ) {
let formData = new FormData();
formData.append("file", selectedFile);
const reader = new window.FileReader();
reader.addEventListener("load", () => {
resolve({name: formData.get('file').name, content: reader.result});
})
reader.readAsDataURL(selectedFile);
} else {
resolve(false );
}
input.remove();
});
setTimeout( ()=>{ input.click(); } );
});
}
This function will return as "content" a data:application/vnd.openxmlformats-officedocument.spreadsheetml.sheet;base64,… URI, that we can upload via POST, e.g.:
uploadExcel().then( result => {
if (result) {
const data = result.content.split(',')[1]; // isolates only the base64 serialization
myApiFunction('schemaName/entityName/operationName', { data: { name: result.name, excel: data } } );
}
});
Then in our operation file:
const readExcel = (await drs.extension('readExcel')).default;
const rows = await readExcel( Buffer.from(context.data.excel, 'base64') );
An alternative, using export excel2JS:
/** @type {import("../../DrsOperation").Excel2JS} */
const parser = (await drs.extension('readExcel')).excel2JS;
const content = await parser( Buffer.from(context.data.excel, 'base64') );
return {content: content };
Note: excel2JS is of type Excel2JS, which is fully documented in DrsOperation.d.ts.
By using the appropriate JSDoc notation we'll be guided through the syntax of its optional second parameter (conversion options) and response type.
This will return a structure like:
{
"content": {
"Sheet1": [
{ "Name": "Mary", "Age": 25 },
{ "Name": "Henry", "Age": 45 },
{ "Name": "Joe", "Age": 89 }
],
"Disney-Characters": [
{ "English name": "Donald", "English surname": "Duck", "Italian name": "Paperino" },
{ "English name": "Mickey", "English surname": "Mouse", "Italian name": "Topolino" },
{ "English name": null, "English surname": null, "Italian name": "Topesio" }
]
}
}
This automatic conversion of an Excel workbook reports every valued worksheet assuming that it's a
table of data with column names in the first row.
Every column with an empty header (empty cell in the first row) is skipped.
Duplicated column names will report only the last column value.
Returns the writeXlsxFile object.
To add a custom extension simply add a .js or .mjs file to the drs-extensions folder.
In this case import extensions with their full name, e.g.: await drs.extension('my-extension.js')
If custom extension files are added or modified, changes become seamlessly effective; it's not necessary to restart the server.
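A minimal sketch of a custom extension, assuming an ES module with a default export (file and function names are hypothetical):

// drs-extensions/my-extension.mjs
export default function greet(name) {
  return `Hello, ${name}!`;
}

// In an operation file:
const greet = (await drs.extension('my-extension.mjs')).default;
return { message: greet('world') };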
As mentioned above, some additions/changes are automatically detected and become seamlessly effective, without a server restart.
This is true for:
Please note: this doesn't imply that they are always refetched and reparsed.
As long as the source file doesn't change, the last parse/import remains valid and doesn't need to be repeated.
This improves server efficiency.