Lately, I almost got worried: whenever I looked around our office in Berlin, I saw that strange combination of exhaustion and focus that so often precedes a major milestone in development.

Historically, there has been a clear separation between backend messaging (Kafka, AMQP, etc.) and frontend streaming (socket.io, etc.), as well as between messaging and storage. Keeping these in sync has been an eternal development vortex, sucking up dev time and application performance alike.

In 2016, deepstream stepped up to bridge this gap by creating a realtime datastore and messaging layer that could be accessed by backend services and frontend clients alike. Consequently, it gained significant traction, not only within the developer community, but especially within companies as large as Ticketmaster, BNP Paribas and T-Mobile, whose demanding requirements in terms of scale, throughput and integrations simply weren’t met by existing realtime technologies.

But with a platform as ubiquitous as deepstream, there were inevitably shortcomings. Handling complex relational data structures is one of them; support for industry-specific standards (e.g. MQTT for IoT or FIX/FAST for finance) is another.

With deepstream 3.0 we are confident that we have laid the groundwork to overcome these challenges. We have grown into a universal realtime platform that can be extended with message types and schemas and can unify communication between endpoints as diverse as smartphones, browsers and server processes, but also fridges, network routers, in-car message buses and sensor arrays.

deepstream 3.0 introduces a generic message endpoint system, allowing for new messaging mechanisms to be added and removed as needed. And it comes with the first one integrated from the start: a powerful HTTP API that allows any programming language to create, read, update and delete records, send events and make remote procedure calls. This makes it possible to update records from AWS Lambda functions, issue AJAX requests without the need for a deepstream client connection, bulk import data from a database or contact the server directly from within a webhook.
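To give a rough feel for what this looks like from plain JavaScript, here is a hypothetical sketch of a record update sent over HTTP. The endpoint URL and payload shape below are illustrative assumptions, not the documented API; consult the official HTTP API reference for the exact format.

```javascript
// Hypothetical sketch: updating a record over deepstream's HTTP API from
// plain JavaScript, e.g. inside an AWS Lambda function or a webhook handler.
// The payload shape and endpoint URL are illustrative assumptions; check
// the official HTTP API documentation for the exact format.
const buildRecordWrite = (recordName, data) => ({
  topic: 'record',
  action: 'write',
  recordName,
  data
});

// A batch request carrying a single record update:
const payload = { body: [buildRecordWrite('users/ada', { city: 'Berlin' })] };

// Sending it requires nothing beyond a generic HTTP client:
// fetch('https://<your-deepstream-host>/api', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(payload)
// });
```

Because each request is a single stateless POST, the same call works unchanged from cron jobs, bulk-import scripts or webhooks, without ever holding a client connection open.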

The HTTP API also enables support for the deepstream PHP client, opening the platform up to a whole new range of use cases and developers.

But there is much more on the horizon. Next on the agenda: GraphQL support.

rec = ds.record.getRecord(`graphql:{
  person(personID: 4) {
    name,
    birthYear,
    homeworld {
      name
    },
    filmConnection {
      films {
        title
      }
    }
  }
}`)


rec.set('homeworld.name', 'Alderaan')

We feel that GraphQL is an amazing standard that’s perfectly suited to making querying deepstream and working with relational data easier. Given our development efforts so far and the power of deepstream as a distributed platform, we’re confident that we’ll soon release what will be the fastest and most scalable GraphQL server to date.

deepstream has grown into a rich ecosystem. On top of the open-source version, we’ve launched deepstreamHub as a platform, along with a clusterable on-premise version offering extended scale and features such as monitoring, SSO integration and AI-based cluster management for enterprises.

This means we’ll introduce a few structural changes: as of deepstream 3.0, we’ll discontinue the current message-connector API and replace it with a built-in, high-performance p2p/small-world cluster messaging system, available exclusively as an enterprise plugin.

Over the next few days we’ll also move the content of deepstream.io to deepstreamhub, which will become the new home for everything deepstream.

We’re very excited about the immediate future and want to say thank you to our amazing user-base and community for their patient support and spectacular development efforts.

... You know who you are. Stay awesome!