The command-line interface is critical to the operations, compliance, controls, and future security concerns of the Stack Standard API and its toolbelt, gsys. The language used to transform stacks is a meta-language based on GraphQL with a custom schema. The schema describes requirements, rationale, behaviors, conditions, actions, and events. These are relatively loosely coupled, but they do define foreign-key relationships and GraphQL dependencies, so that searching them, or working through the database manually to find an answer, is self-explanatory and intuitive.
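
To make that shape concrete, here is a minimal sketch of what such a schema could look like. Every type and field name below is hypothetical, not the actual gsys schema:

```typescript
// Illustrative only: a loosely coupled schema where requirements link to
// rationale, behaviors, conditions, actions, and events by reference,
// mirroring the foreign-key relationships described above.
import { buildSchema } from "graphql";

export const schema = buildSchema(/* GraphQL */ `
  type Requirement {
    id: ID!
    rationale: Rationale      # why the requirement exists
    behaviors: [Behavior!]!   # behaviors that satisfy it
  }

  type Rationale {
    id: ID!
    text: String!
  }

  type Behavior {
    id: ID!
    conditions: [Condition!]! # when the behavior applies
    actions: [Action!]!       # what the behavior does
    events: [Event!]!         # events the behavior emits
  }

  type Condition { id: ID! expression: String! }
  type Action    { id: ID! command: String! }
  type Event     { id: ID! name: String! occurredAt: String! }

  type Query {
    requirement(id: ID!): Requirement
  }
`);
```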

The API is versioned, and no new version will introduce breaking changes against the last. It is imperative that each new version spawns completely fresh: even if a consumer is running an older version of Node, or a remote schema is built on an old version of the API itself, there must be no conflicts on typing or field names. This holds even if we decide to schema-stitch.
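
One way to guarantee that when stitching is to namespace the older subschema's types by version. A minimal sketch using @graphql-tools; `v1Schema` and `v2Schema` are placeholders for the real executable schemas:

```typescript
// Prefix every type coming from the older subschema so "User" becomes
// "V1_User", etc. -- no typing or field-name collisions with the new version.
import { stitchSchemas } from "@graphql-tools/stitch";
import { RenameTypes } from "@graphql-tools/wrap";
import { v1Schema, v2Schema } from "./schemas"; // hypothetical module

export const gateway = stitchSchemas({
  subschemas: [
    // Old version: rename its types into a versioned namespace.
    { schema: v1Schema, transforms: [new RenameTypes((name) => `V1_${name}`)] },
    // Current version keeps its natural names.
    { schema: v2Schema },
  ],
});
```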

The cluster backend is built on Postgres running Hasura stacks and the GraphQL engine, with GraphiQL and Voyager, and is ready for Prisma introspection at hyperscale via the TimescaleDB time-series extension! I assure you the Timescale extension has more than just streaming pub/sub subscription value: in my own tests under the same load-bearing conditions, it made Postgres approximately 6,000x faster than MongoDB. Why do all these people want to use Mongo anyway?
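
For reference, turning a plain Postgres table into a Timescale hypertable is a one-liner. A minimal sketch with the node `pg` client; the `metrics` table and connection string are assumptions, not our actual schema:

```typescript
// Enable TimescaleDB and convert a plain table into a hypertable
// partitioned on its time column -- the source of the time-series speedups.
import { Client } from "pg";

async function main() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  await client.query(`CREATE EXTENSION IF NOT EXISTS timescaledb`);
  await client.query(`
    CREATE TABLE IF NOT EXISTS metrics (
      time  TIMESTAMPTZ NOT NULL,
      name  TEXT        NOT NULL,
      value DOUBLE PRECISION
    )
  `);
  // Standard TimescaleDB call; idempotent thanks to if_not_exists.
  await client.query(
    `SELECT create_hypertable('metrics', 'time', if_not_exists => TRUE)`
  );

  await client.end();
}

main().catch(console.error);
```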

DB Architecture

Here's the architecture of the database:

https://s3-us-west-2.amazonaws.com/secure.notion-static.com/fae9aeaf-8337-4605-a257-8cfa90b1d601/timescaledb-single-backups.png

Architecture of a typical stack

As you can see, Hasura and any independent GraphQL servers reside in the Kubernetes stack along with clustered Postgres. This all currently resides on the FSE Amazon AWS clusters dev-1, dev-2, and prod-1 in us-east-2 (Ohio). All Rancher Server nodes are in us-east-1 (N. Virginia). All database backups are sent to S3, and the EKS master nodes control the node group for each cluster, so auto-scaling occurs under pressure and scales back when the load eases.
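
In AWS CDK v2 terms, that node-group arrangement looks roughly like the sketch below; the cluster name, size bounds, and instance type are illustrative, not the real dev-1 configuration:

```typescript
// Hedged sketch: EKS manages the masters; a managed node group gives the
// Auto Scaling group bounds within which scaling happens under pressure.
import * as cdk from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as eks from "aws-cdk-lib/aws-eks";

const app = new cdk.App();
const stack = new cdk.Stack(app, "DevCluster", {
  env: { region: "us-east-2" }, // Ohio, per the layout above
});

const cluster = new eks.Cluster(stack, "dev-1", {
  version: eks.KubernetesVersion.V1_27,
  defaultCapacity: 0, // we add an explicit node group below
});

// minSize/maxSize bound the underlying Auto Scaling group; the cluster
// autoscaler scales out under load and back in within these bounds.
cluster.addNodegroupCapacity("default-ng", {
  minSize: 2,
  desiredSize: 3,
  maxSize: 10,
  instanceTypes: [new ec2.InstanceType("m5.large")],
});
```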

At this point it would not be difficult to move the entire cloud to Anthos or Azure; we just need kubectl to do it and Helm to spawn it. Backups would need to be restored from the persistent volumes, and etcd would also need to be restored. Other than that, Rancher gets all of its state from the Kube API and etcd, so it's no big deal if Rancher goes down either. Backups are generated automatically to maintain etcd state, and the master nodes are managed by EKS.
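
A rough sketch of that lift-and-shift flow as a script. Context names, chart paths, and restore values below are hypothetical; only the rancher-latest Helm repo URL is real:

```typescript
// Point kubectl at the new cluster, re-spawn the stack from Helm,
// then restore state from the persistent-volume backups.
import { execSync } from "node:child_process";

const sh = (cmd: string) => execSync(cmd, { stdio: "inherit" });

// 1. Target the new cluster (Anthos, AKS, etc.) -- context name is made up.
sh("kubectl config use-context new-cloud-cluster");

// 2. Re-spawn from the same Helm charts.
sh("helm repo add rancher-latest https://releases.rancher.com/server-charts/latest");
sh("helm install rancher rancher-latest/rancher --namespace cattle-system --create-namespace");

// 3. Restore DB state from PV backups (hypothetical chart and values;
//    the real restore depends on how the PV backups were taken).
sh("helm install postgres ./charts/postgres --set restore.fromBackup=s3://backups/latest");
```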

Worst case scenario, two multi-AZ regional data centers get knocked offline, or Amazon pulls the plug like they did to Parler; in that event we'll have hot-standby DB read replicas for mission-critical ops on different cloud providers, such as Heroku, DigitalOcean, HostGator, etc. That's actually the main selling point of Rancher: it is hybrid-cloud aware.
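
For those read replicas, stock Postgres streaming replication is enough. A minimal sketch, assuming a replication user already exists and the primary is reachable; host names are placeholders:

```typescript
// pg_basebackup -R clones the primary and writes standby.signal plus
// primary_conninfo, so the copy comes up as a read-only hot standby.
import { execSync } from "node:child_process";

execSync(
  "pg_basebackup -h primary.us-east-2.example.com -U replicator " +
    "-D /var/lib/postgresql/data -R --wal-method=stream",
  { stdio: "inherit" }
);
```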

https://s3-us-west-2.amazonaws.com/secure.notion-static.com/b3e5a7ae-1bd4-40c2-8ccf-dc7b5f56f1cf/EKS_Chart_Helm__AWS.png

Overall Architecture and Using the Language