AI. Augmented Intelligence Platform

Everything you need to run your converged services and applications

No commitments. Scale as you grow.

First 5€ free to kickstart your project.

Sign up now
BigConnect

Core

starts at 4 EUR / month

Elastic and secure virtual cloud servers for all your hosting needs,
with NVMe SSD storage, 10 Gbps redundant connections and the latest Intel and AMD CPUs

Extreme Performance

BigConnect's Cloud Compute Service (CCS) provides fast DDR4 memory, NVMe SSD local storage and the latest Intel and AMD CPUs to help you power your cloud applications and achieve faster results with low latency.

Deploy CCS instances with just a few clicks from the easy-to-use console and scale capacity up or down based on real-time demands.

Super Fast Deployment

Spin up your choice of virtual machine in 10 seconds. Basic, General Purpose, CPU-Optimized, or Memory-Optimized configurations provide flexibility to build, test, and grow your app from startup to scale.


Reliable Operating Systems

Ubuntu

Fedora

Debian

CentOS

Advanced Networking

Let your servers communicate through a private network and set up complex network topologies. Ideal for running a Kubernetes cluster or a database server that should not be publicly reachable.

Load Balancers

Load Balancers let you scale your applications easily by automatically distributing traffic across your infrastructure. Handle common use-cases like TLS termination or create an internet-facing entry point to route traffic into your BigConnect Cloud Networks.

Backups & Snapshots

With our snapshot feature, you can make manual backups of your servers. You can use our snapshots to restore your server to a saved image, use saved images to create new cloud servers, or transfer images during a project.

Backups are copies of your server that we make automatically to keep your data safe. You can store up to 7 of them.

Attachable Storage

Volumes offer highly available and reliable storage for your cloud servers. You can expand each Volume to up to 10 TB at any time and you can connect them to your BigConnect Cloud Compute instances.

Database

Database hosting with no worries

Simple setup and maintenance

Launch a database cluster with just a few clicks and then access it using the cloud console or your favorite database client. Never worry about maintenance operations or updates – we handle that for you.

Fast, reliable performance

Managed Databases run on enterprise-class hardware and SSD storage, giving you lightning-fast performance.

Predictable pricing

Always know what you'll pay with monthly caps and flat pricing.

Free backups

Your data is critical. That’s why we ensure it’s backed up automatically every day. Restore data to any point within the previous seven days.

End-to-end security

Databases run in your account’s private network, which isolates communication at the account or team level. Requests via the public internet can still reach your database, but only if you whitelist specific inbound sources. Data is also encrypted in transit and at rest.

Launch Your Database service

Cloud DNS

Complete domain management

Simple but Powerful

Host your domains on BigConnect Cloud and take advantage of the powerful DNS management features that go one step further to provide features like geo-targeting, load-balancing and more.

Complete control

Take complete control of your DNS. Visually manage your records like A, CNAME, NS, SRV, MX, CAA, TXT and more.

Dynamic DNS records

Dynamic DNS records enable behaviour based on the requester's IP address, the requester's EDNS Client Subnet, server availability or other factors. Capabilities range from very simple to highly advanced multi-pool, geographically and weight-based load-balanced IP address selection.

Free

Our Cloud DNS service is free to use for all our customers. Manage any number of domains and records at no charge.

Advanced Features

Take advantage of the special LUA record to configure your DNS entries to respond dynamically based on the location of the client, time or server health.
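As a sketch of what such entries can look like, here are two LUA records in PowerDNS-style zone-file syntax (the IP addresses are placeholders; the exact syntax supported by the service may differ):

```
; Return 192.0.2.1 only while port 443 answers on it, else fail over
www  IN LUA A "ifportup(443, {'192.0.2.1', '192.0.2.2'})"

; Answer with whichever address is geographically closest to the client
geo  IN LUA A "pickclosest({'192.0.2.1', '198.51.100.1'})"
```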

No coding

Visually manage your DNS records, no coding or advanced knowledge required.

Fast propagation

DNS record changes are propagated immediately across the Internet to maximize business availability.

Launch Your DNS service

One-click Apps

Launch your favorite apps with just one click.

Ready to go in seconds

Save time with preconfigured environments that already have all the prerequisites installed – which removes the hassle of spinning up and provisioning servers.

The tools you need – in one place

We review everyday apps and cutting-edge technologies so you always have access to the most efficient tools to build your business.

Rich App catalog

Choose between WordPress, Magento, Liferay, cPanel, OpenCart and many more to get your use-case up and running fast.

Focus on what's important

Leave the infrastructure to us and focus on what's important for your business.

Scale instantly

Don't waste time preparing an expensive infrastructure. Scale your apps up or down with just a few clicks.

Launch Your Favorite APP

Cloud Intelligence Service

High-performance models for text and image intelligence

Cognitive models

A comprehensive family of cognitive APIs to help you build intelligent apps. Designed to bring AI within reach of every developer, without requiring machine-learning expertise. All it takes is an API call to embed the ability to see, search, understand and accelerate decision-making into your apps.

Text Analytics API

Discover insights in unstructured text using natural language processing (NLP). No machine learning expertise required.

Identify key phrases and entities such as people, places, and organizations to understand common topics and trends. Classify industry-specific terminology using domain-specific, pretrained models. Gain a deeper understanding of customer opinions with sentiment analysis.
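To make the "all it takes is an API call" idea concrete, here is a hedged sketch of building a request body for such a service. The endpoint, field names and feature flags are illustrative assumptions, not the documented BigConnect API:

```python
import json

def build_text_analytics_request(documents, features=("entities", "keyPhrases", "sentiment")):
    # Hypothetical request body: one entry per document, plus the
    # analyses to run. Field names are assumptions for illustration.
    return {
        "documents": [{"id": str(i), "text": t} for i, t in enumerate(documents, 1)],
        "features": list(features),
    }

payload = build_text_analytics_request(["BigConnect opened a new office in Berlin."])
print(json.dumps(payload, indent=2))

# An actual call would then POST this payload, e.g.:
# requests.post("https://api.example.com/text/analyze", json=payload,
#               headers={"Authorization": "Bearer <token>"})
```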

Language Detection

Evaluate text input in a wide range of languages, variants, and dialects using the language detection feature.

Named Entity Recognition

Extract pre-built entities such as people, places, organizations, dates/times and numerals in documents using named entity recognition.

Summarization

Quickly identify the main points in textual data. Get a list of relevant phrases that best describe the subject of each document using key phrase extraction.

Sentiment Analysis

Detect positive and negative sentiment in social media, customer reviews, and other sources to get a pulse on your brand.

Similarity

Evaluate if documents have similar content, ignoring syntax and grammar. Group them together to reduce redundant information throughout the organization.
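The intuition behind content similarity that ignores syntax and grammar can be sketched with a plain bag-of-words cosine similarity (a minimal illustration, not the service's actual model):

```python
import math
import re
from collections import Counter

def bow(text):
    # Lowercased bag of words: word order, syntax and grammar are ignored.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a, b):
    va, vb = bow(a), bow(b)
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

s = cosine_similarity("The invoice was paid on Friday",
                      "On Friday the invoice was paid")
# Same words in a different order score as identical content (1.0).
```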

Classification

Categorize documents based on international taxonomies (e.g. Political, Economic, Health).

Topic Modeling

Discover common concepts that occur in a collection of documents.

Image Analytics API

Create products that more people can use by embedding artificial vision capabilities in your apps.

Use Image Analytics to label content with objects and concepts, extract text, generate image descriptions, detect faces, personal traits and emotions. No machine learning expertise is required.

Object detection

Detect and classify multiple objects including the location of each object within the image.

Advanced OCR

Extract text from images and documents with mixed languages and writing styles.

Facial Detection & Facial Recognition

Embed facial detection and recognition into your apps for a seamless and highly secure user experience. Additional features include age, gender and emotion detection.

Image Captioning

An AI service that will describe what's inside an image using a single sentence.

BigConnect AI Service

Answers

An easy way for everyone to ask questions and learn from data.

BigConnect Answers is a simple and powerful analytics tool that lets anyone learn and make decisions from their company’s data—no technical knowledge required.

Fresh out of the box, BigConnect Answers will show you a few things on the home page:

  • Some automatic explorations of your tables that you can look at and save as dashboards.
  • A navigation sidebar that lists:
      • A Home button to return to your Answers home page.
      • Collections, where you’ll store all of your questions, dashboards, and models. You have your own personal collection to store drafts and experiments that aren’t yet ready to share.
      • A Data section, which lists all of the data sources your Answers is connected to.
      • Settings (the gear icon at the bottom of the navigation sidebar).

You can also Bookmark your favorite items, and they’ll appear toward the top of the navigation sidebar.

To open and close the navigation sidebar, click on the logo in the upper left.

Questions

Questions are queries plus their visualization. You can ask questions using the Answers graphical query builder, or create a native/SQL query.

Dashboards

Dashboards group questions and present them on a single page. You can think of dashboards as shareable reports that feature a set of related questions. You can set up subscriptions to dashboards via email or Slack to receive the exported results of the dashboard’s questions.

A dashboard comprises a set of cards arranged on a grid. These cards can be questions - such as tables, charts, or maps - or they can be text boxes.

You can add filter widgets to dashboards that filter data identically across multiple questions, and customize what happens when people click on a chart or a table.

You can make as many dashboards as you want. Go nuts.

Data modeling

Answers provides tools for organizing your data and making it easier for people to understand.

Actions

Actions let you write parameterized SQL that can then be attached to buttons, clicks, or even added on the page as form elements.
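A hypothetical parameterized action might look like the following, where the double-brace placeholders are filled in from form elements or click behaviors at run time (the template syntax shown is an assumption for illustration):

```sql
-- Update a customer's plan; {{plan}} and {{user_id}} come from the form.
UPDATE subscriptions
SET plan = {{plan}}
WHERE user_id = {{user_id}};
```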

Organization

Tools for finding things and keeping your Answers organized.

People

User accounts, groups, and authentication. For permissions, see Permissions overview.

Permissions

Answers has a simple, but powerful permissions system.

Embedding

You can embed Answers tables, charts, and dashboards—even Answers’s query builder—in your website or application.

Jupyter Notebooks

With Python, Scala and R kernels

Write code super-fast

Perform quick exploratory data science or build machine learning models using collaborative notebooks that support multiple languages and built-in data visualizations.

An integrated and secure JupyterLab environment for data scientists and machine learning developers to experiment, develop, and deploy models into production. Pre-installed with the latest data science and machine learning frameworks.

BigConnect CodeLab Service

Work together

Share notebooks and work with peers in multiple languages (R, Python, SQL and Scala) and libraries of choice.

Fast customizations

Write your own pieces of code to perform specific tasks not yet supported by BigConnect.

Data Science

Exploratory Data Analysis and Data Science using any available library you can think of.

Install your packages

Install your own packages inside the Jupyter Notebook.

Launch Your CodeLab service

Trino

A query engine to explore your data universe that runs at ludicrous speeds.

Unparalleled Performance

Trino removes data silos, opening doors to new insights. It's a distributed SQL query engine for running interactive analytic queries against data sources of all sizes ranging from gigabytes to petabytes.

What can it do?

Trino allows querying data where it lives, including Hive, Cassandra, relational databases or even proprietary data stores. A single Trino query can combine data from multiple sources, allowing for analytics across your entire organization.

Trino is targeted at analysts who expect response times ranging from sub-second to minutes. Trino breaks the false choice between having fast analytics using an expensive commercial solution or using a slow "free" solution that requires excessive hardware.

BigConnect Trino Service

Speed

Trino is a highly parallel and distributed query engine that is built from the ground up for efficient, low-latency analytics.

Simplicity

Trino is an ANSI SQL compliant query engine that works with BI tools such as R, Tableau, Power BI, Superset and many others.

In-place analysis

You can natively query data in Hadoop, S3, Cassandra, MySQL, and many others, without the need for complex, slow, and error-prone processes for copying the data.

Query federation

Access data from multiple systems within a single query. For example, join historic log data stored in an S3 object storage with customer data stored in a MySQL relational database.
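A federated query of that kind could look like the following Trino SQL, where the catalog and schema names (`s3_logs`, `crm`) are illustrative and would match your own configured connectors:

```sql
-- Join S3-backed log data (via the Hive connector) with MySQL customer data
SELECT c.name, count(*) AS page_views
FROM s3_logs.web.access_log AS l
JOIN crm.public.customers AS c
  ON l.customer_id = c.id
WHERE l.event_date >= DATE '2024-01-01'
GROUP BY c.name
ORDER BY page_views DESC
LIMIT 10;
```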

Versatile

Supports diverse use cases: ad-hoc analytics at interactive speeds, massive multi-hour batch queries, and high volume apps that perform sub-second queries.

Launch Your Trino service

Data Pipelines

Build streaming data pipelines from any source to any destination

Data Pipelines is a lightweight, powerful design and execution engine that streams data in real time. Use Data Pipelines to route and process data in your data streams.

To define the flow of data, you design a pipeline in Data Pipelines Studio. A pipeline consists of stages that represent the origin and destination of the pipeline, and any additional processing that you want to perform. After you design the pipeline, you click Start and Data Pipelines goes to work.

Data Pipelines processes data when it arrives at the origin and waits quietly when not needed. You can view real-time statistics about your data, inspect data as it passes through the pipeline, or take a close look at a snapshot of data.

Data Pipelines Studio

Data Pipelines Studio is the visual environment where you design your pipelines. Drag in stages for the pipeline's origin and destination, add any processing you want in between, then click Start and the Studio goes to work.

Pipeline Controller

The Pipeline Controller is a central point of control for all of your dataflow pipelines. It allows teams to build and execute large numbers of complex dataflows at scale.

Pipelines can be published to the Pipeline Controller service to run on multiple machines, on schedules and more. It provides services such as load balancing and failover to achieve a high degree of availability and performance for executing your pipelines.

Teams of data engineers use the shared pipeline repository provided with the Data Pipelines service to collaboratively build pipelines using the Studio. Data Pipelines lets you deploy and execute dataflows at scale on multiple provisioned cloud instances.

The pipeline repository also includes pre-built pipelines for social network crawling and data workers for the BigConnect Platform.

BigGraph Data Workers

These are Data Workers implemented using Data Pipelines flows. Out-of-the-box, the following Data Workers are available:

  • DW_Age_Detector
  • DW_Category_Detection
  • DW_Emotion_Detector
  • DW_Entity_Extractor
  • DW_Face_Detector
  • DW_Face_Features_Detector
  • DW_Gender_Detector
  • DW_Image_Captioning
  • DW_Image_Metadata_Extractor
  • DW_Location_Resolver
  • DW_Object_Detector
  • DW_OCR_Detector
  • DW_Rabbit_consumer
  • DW_Sentiment_Analysis
  • DW_Speech_to_Text

Social Network Data

Data Pipelines has data extractors for social networks as well. Data extraction is performed using only publicly available information from the respective social network.

Discovery

A complete, fast, integrated Business Intelligence service that combines data visualization, exploration and transformation in a friendly and intuitive way.

Business Intelligence simplified

Connect and visualize your data in minutes. BigConnect Discovery is up to 100x faster than traditional solutions. From spreadsheets to databases to Hadoop to cloud services, explore any data.

Smart Dashboards

Combine multiple views of data to get richer insight. Drill down into various dimensions to get a better understanding of the root cause.

Fast Data Preparation

Prepare your data visually without programming skills or deep technical knowledge.

Ease of Use

Anyone can analyze data with intuitive drag & drop widgets. No programming, just insight.

Interactive dashboards

Aggregate content from sources and systems, create personalized views of data, and interact with data in a single, integrated experience.

Metadata Management

Enable easy access to disparate information in a quick, simple way. Create metadata catalogs to standardize data usage and dissemination.

Collaboration

Create workspaces and workbenches for your specific tasks. Invite others to help and see what you have done.

Multiple Data Sources

Load your data from Hive, Kafka, MySQL, Trino, PostgreSQL, local files, or your own custom system. Monitor data transactions, queries and status instantly.

Visual, Simple Data Transformation

Transform raw data to suit your purpose in a way that's easy and fun. Augment, enhance, heal and create rich data sets to improve insights and sharpen your understanding of the world you work in.

Query Monitoring

Monitor how the engine responds to your queries, see where the bottlenecks are and tune your work appropriately.

Explorer

The perfect service for making sense of unstructured data.

BigConnect provides an extensible platform to understand and work with any amount of data, from any source and in any format. It’s an information-agnostic system where all data that flows into the system is transformed, mapped, enriched and then stored in a logical way using a semantic model of concepts, attributes and relationships. It provides an extensible, unified visual interface with tools for data discovery and analysis, collaboration, alerting and information management.

BigConnect Explorer is best suited for exploring and making sense of unstructured data. It can do a lot: data ingestion, data enrichment, discovery and analysis.

It is completely extensible and can be adapted to suit any use case using common development languages like Java, JavaScript, React and CSS/LESS.

Customizations are in the form of plugins that can be developed both for the back-end and the front-end.

Dynamic Data Model

The core foundation of BigConnect Explorer is the dynamic, semantic data model. It represents the way you store, correlate and query all information and it’s a conceptual data model that expresses information in a factual way: objects, relations and attributes.

Ingestion

BigConnect Explorer can ingest and process any kind of information: databases, office documents, text files, XML files, HTML files, images, audio and video files or streams.

Visual Interface

BigConnect provides an extensible integrated visual console where users can query and analyze the data. The console is collaborative, and all actions are taken inside a space. This means that all data that is added, modified, queried or analyzed can be contained to that space alone. Data changes that happen inside a space can be published to be available for everybody.

Enterprise Search

ElasticSearch is used behind the scenes to search through the data model and compute aggregations. This means that search results are delivered in near real time and search queries can be as complex as you can imagine.

Dashboard

The Explorer includes a dashboard that users land on when they log in. Various widgets can be added or removed from the dashboard to bring any relevant information into focus.

Analysis Tools

Data analysis is performed using analysis tools. Out of the box, four tools are provided: Graph, Map, Timeline and Charts but one can create any kind of analysis product using the SDK.

Any number of analysis tool instances can be added inside a space to facilitate analysis from multiple perspectives.

Behaviors

Multiple saved searches (normal and Cypher queries) can be grouped together to define a behavior. A score is provided for each query and a threshold for the entire behavior. All objects that satisfy the queries and are over the threshold will be compliant with the defined behavior.
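The scoring described above can be sketched in a few lines: each query contributes its score when an object matches it, and objects whose total meets the threshold comply with the behavior. Query evaluation is stubbed out with plain predicates here; real behaviors use saved and Cypher queries:

```python
def behavior_score(obj, scored_queries):
    # Sum the scores of every query the object satisfies.
    return sum(score for predicate, score in scored_queries if predicate(obj))

def is_compliant(obj, scored_queries, threshold):
    return behavior_score(obj, scored_queries) >= threshold

# Illustrative queries and object (all names/values are made up):
queries = [
    (lambda o: o.get("country") == "DE", 40),
    (lambda o: o.get("transactions", 0) > 100, 35),
    (lambda o: "watchlist" in o.get("tags", []), 50),
]
obj = {"country": "DE", "transactions": 250, "tags": []}
# Matches the first two queries: score = 40 + 35 = 75.
```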

Security

Security is enforced at the UI level and at the data level. At the UI level, security is achieved by users, roles and privileges that control what users can do in the console, both at a global level and at the space level.

Extensibility

BigConnect Explorer is designed to be highly extensible and customizable at all levels using plugins, both for the backend and the front-end.

Spark

A unified analytics service for large-scale data processing

Managed Apache Spark service

A fully managed and highly scalable service for running Apache Spark at scale.

Fully managed deployment, logging, and monitoring let you focus on your data and analytics, not on your infrastructure.

Collaborative Notebooks

CodeLab integration to quickly access and explore data, find and share new insights, and build models collaboratively, with the languages and tools of your choice.

The best of open source with the best of BigConnect Cloud

The Spark service lets you take the open source tools, algorithms, and programming languages that you use today but makes it easy to apply them on cloud-scale datasets.
At the same time, the Spark service has out-of-the-box integration with the rest of the BigConnect Cloud analytics, database, and AI ecosystem.

BigConnect Spark Service

Operational Data Analytics

Build a “just-in-time” data warehouse that eliminates the need to invest in costly ETL pipelines, revolutionizing the way data teams analyze their data sets.

Ease of use

Spark offers more than 80 high-level operators that make it easy to build parallel apps. You can use it interactively from Scala, Python, R, and SQL shells to write applications quickly.
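The flavor of those operators (map, filter, reduce and friends) can be mimicked in plain Python; this local sketch is an analogy only, since the real Spark API runs the same chain distributed across a cluster:

```python
from functools import reduce

words = "to be or not to be".split()

# Spark-style chain, locally: map each word to (word, 1), then reduce by key.
pairs = map(lambda w: (w, 1), words)
counts = reduce(
    lambda acc, kv: {**acc, kv[0]: acc.get(kv[0], 0) + kv[1]},
    pairs,
    {},
)
# counts == {"to": 2, "be": 2, "or": 1, "not": 1}
```

In PySpark the equivalent would be roughly `sc.parallelize(words).map(lambda w: (w, 1)).reduceByKey(add)`, executed across the cluster instead of one process.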

Generality

Spark powers a stack of libraries, including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application.

Launch Your Spark service

Object Store

Object storage for all needs and sizes. Store any amount of data. Retrieve it as often as you’d like.

High Performance

High-performance data storage service optimized for data science and analytics workloads that can store, process, and access massive amounts of data at very high speed, from anywhere in the world.

Full S3 compatibility

Use the large existing ecosystem of S3 tools, utilities, plugins, extensions, and libraries to manage your BigConnect Object Store.

CDN

Reduce the latency when loading frequently accessed content by up to 70%, improving overall site or app performance.

BigConnect Object Store Service

Web Asset Delivery

Host and deliver static web or application assets such as images, JavaScript, and CSS. Enable the CDN to speed up your end user experience by up to 70%.

Media hosting

Redundant, scalable, and highly available infrastructure to host video, photo, or audio assets.

Software Delivery

Host images, containers, or software libraries that your customers can download.

Machine Learning Datasets

Store terabytes of data and use it for model training and evaluation with the native object storage integration in other services.

Launch Your Object Store

TetraML

Create world-class Machine Learning models

Tetra is an in-memory, distributed, fast, and scalable machine learning and predictive analytics platform that allows you to build machine learning models on big data and provides easy productionization of those models in an enterprise environment.

Inside Tetra, a Distributed Key/Value store is used to access and reference data, models, objects, etc., across all nodes and machines. The algorithms are implemented on top of H2O’s distributed Map/Reduce framework and utilize the Java Fork/Join framework for multi-threading. The data is read in parallel and is distributed across the cluster and stored in memory in a columnar format in a compressed way. H2O’s data parser has built-in intelligence to guess the schema of the incoming dataset and supports data ingest from multiple sources in various formats.

Algorithms

Algorithms available in TetraML:

AutoML

In recent years, the demand for machine learning experts has outpaced the supply, despite the surge of people entering the field. To address this gap, there have been big strides in the development of user-friendly machine learning software that can be used by non-experts. The first steps toward simplifying machine learning involved developing simple, unified interfaces to a variety of machine learning algorithms (e.g. TetraML).

Cox Proportional Hazards (CoxPH)

Cox proportional hazards models are the most widely used approach for modeling time-to-event data. As the name suggests, they are built around the hazard function, which computes the instantaneous rate of an event's occurrence at a given time.
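For reference, the model has the standard textbook form (general notation, not Tetra-specific): the hazard for a subject with covariates $x_1, \dots, x_p$ is

```latex
h(t \mid x) = h_0(t)\,\exp\!\left(x_1\beta_1 + \dots + x_p\beta_p\right)
```

where $h_0(t)$ is the baseline hazard and the $\beta_j$ are the fitted coefficients; the "proportional" part is that covariates scale the baseline hazard multiplicatively, independent of time.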

Tetra Flow

Tetra Flow is a user interface for Tetra. It is a web-based interactive environment that allows you to combine code execution, text, mathematics, plots, and rich media in a single document.

With Tetra Flow, you can capture, rerun, annotate, present, and share your workflow. Tetra Flow allows you to use Tetra interactively to import files, build models, and iteratively improve them. Based on your models, you can make predictions and add rich text to create vignettes of your work - all within Flow’s browser-based environment.

Flow’s hybrid user interface seamlessly blends command-line computing with a modern graphical user interface. However, rather than displaying output as plain text, Flow provides a point-and-click user interface for every Tetra operation. It allows you to access any Tetra object in the form of well-organized tabular data.

Kafka

Managed Apache Kafka clusters

Fully managed Kafka clusters

BigConnect automates every part of setting up, running and scaling Apache Kafka. Just click a button and you'll have a fully managed Kafka cluster up and running in two minutes.

Management and Monitoring

Manage your cluster using a visual console:

  • Cluster state (topics, consumers, offsets, brokers, replica distribution, partition distribution)
  • Manage topics
  • Reassign partitions
  • Run preferred replica election

BigConnect Kafka Service

System Decoupling

Kafka works well as a replacement for a more traditional message broker. Service systems are decoupled from each other and communicate through message queues.

Realtime Processing

Many modern systems require data to be processed as soon as it becomes available. Kafka can be useful here since it is able to transmit data from producers to data handlers and then on to data stores.

IoT

Kafka acts as a messaging tool that gathers massive volumes of data generated from various types of IoT devices. The data is then retrieved from Kafka for analysis and query.

Log Aggregation

Kafka can be used for logging or monitoring. The logs can be stored in a Kafka cluster, they can be aggregated or processed and then saved in a traditional log-storage solution.

Launch Your Kafka service

Sponge

Smart, scalable web data collection

Sponge helps you transform the limitless amount of unstructured content into valuable information that can be used everywhere. Whether it's websites, blogs or forums, Sponge is a full-featured, flexible and extensible web crawler that runs on any platform and will help you crawl what you want, how you want.

Web sites are crawled with configurable HTTP spiders. It provides a very simple web UI where users can annotate any web page and define how to extract their data of interest.

Once data is extracted, it is fed to the processing pipeline where extracted data can be manipulated before using it in your own service or application.

Any website is a valuable source of information. The robots that can be designed in Sponge allow you to easily decode any raw website, by iteratively highlighting certain areas of the site and mapping them to the structure you want. Using a user-friendly web interface, data mapping can be done in a couple of clicks.

Turn the web into data

Whether it's websites, blogs or forums, Sponge is a full-featured, flexible and extensible visual web crawler that will help you gather information from websites and social media.

Flexible configuration

Fine-tune the scraping process to get only the data you need. With a library of Taggers, Filters and Splitters you can achieve any web scraping use-case.

Visually choose what you need

Use the BigConnect Sponge Chrome web extension to visually annotate what you need to scrape from any website.

High Performance

Using multiple crawling threads and distributed clusters, data can be quickly scraped from thousands of websites.

Job Scheduling

Schedule crawling jobs to run at a specific moment in the future or at a regular interval to get the latest updates. Don't worry about duplicates, Sponge knows when a web page has been changed.
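One common way to detect that a page has changed is to remember a content hash per URL and only reprocess a page when its hash differs; this sketch is an assumption about the mechanism, not Sponge's actual implementation:

```python
import hashlib

seen = {}  # url -> last content hash

def should_process(url, content):
    # Hash the page body; skip it if nothing changed since the last crawl.
    digest = hashlib.sha256(content.encode()).hexdigest()
    if seen.get(url) == digest:
        return False  # unchanged duplicate
    seen[url] = digest
    return True
```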

Taggers, Filters, Splitters

Add additional processing in the scraping workflow to address data cleaning issues, data type conversions, transformations and more.

Monitoring

See what's happening with your scraper, how much data it has collected and how much work it still has to do.

Multiple Destinations

Scraped data can be stored in the Object Store or in plain XML/JSON files.

BigGraph

Massively scalable graph database service

Big Graph

Our proprietary massively scalable big data graph engine with features like enterprise security, enterprise search, dynamic ontologies, Cypher and SQL language support and much more.

Cypher Lab

Visual interface for querying the Graph Engine using the Cypher query language.

Flexible storage

Choose between RocksDB and Accumulo as backend graph stores, depending on your use-case.

Async Data Processing

Push objects to message queues for additional processing of inserts, updates and deletes. Create data workers for processing queued messages to perform tasks like data enrichment, custom computations, external services etc.

Cypher Language support

Full support for the Cypher language for querying and altering graph data.
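For example, a minimal Cypher statement to create and then query graph data (node labels and properties are illustrative):

```cypher
// Create a person connected to a company
CREATE (a:Person {name: 'Ana'})-[:WORKS_AT]->(c:Company {name: 'BigConnect'});

// Query everyone who works at that company
MATCH (p:Person)-[:WORKS_AT]->(c:Company {name: 'BigConnect'})
RETURN p.name;
```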

Enterprise Search

Index nodes and edges to quickly find them based on their property values. Use Trino for running SQL queries against graph objects.

Open Source

The graph engine is fully open-source and available on GitHub.

Elastic Compute Service

Elastic and secure virtual cloud servers for all your hosting needs,
with NVMe SSD storage, 10 Gbps redundant connections and the latest Intel and AMD CPUs

starts at 4 EUR / month

Launch your first server

Virtual Machine

Container management made simple

BigConnect VM hides the complexity of managing containers behind an easy-to-use UI. By removing the need to use the CLI, write YAML or understand manifests, BigConnect VM makes deploying apps and troubleshooting problems so easy that anyone can do it.

The Agent

The agent runs on the nodes of your environment and lets BigConnect VM communicate with and manage remote Docker and Swarm deployments.

Security and compliance

BigConnect VM runs exclusively on your servers, within your network, behind your own firewalls. As a result, we do not currently hold any SOC or PCI/DSS compliance because we do not host any of your infrastructure. You can even run BigConnect VM completely disconnected (air-gapped) without any impact on functionality.

Dashboard

The Docker/Swarm dashboard summarizes your Docker Standalone or Docker Swarm environment and shows the components that make up the environment.

Application Templates

An app template lets you deploy a container (or a stack of containers) to an environment with a set of predetermined configuration values while still allowing you to customize the configuration (for example, environment variables).

Stacks

A stack is a collection of services, usually related to one application or usage. For example, a WordPress stack definition may include a web server container (such as nginx) and a database container (such as MySQL).
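The WordPress example above might be sketched as a Compose-format stack file like the following (image tags, service names, and the password are illustrative, not a BigConnect-provided template):

```yaml
version: "3.8"

services:
  web:
    image: nginx:stable        # web server container
    ports:
      - "80:80"
    depends_on:
      - db

  db:
    image: mysql:8             # database container
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # illustrative only; prefer a secret
    volumes:
      - db-data:/var/lib/mysql        # persist the database

volumes:
  db-data:
```

Deploying the stack then comes down to handing BigConnect VM this one definition instead of starting each container by hand.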

Services

A service consists of an image definition and container configuration as well as instructions on how those containers will be deployed across a Swarm cluster.

Containers

Put simply, a container is a runnable instance of an image. Containers do not hold any persistent data and therefore can be destroyed and recreated as needed.

Images

Images are what containers are built from. Each image defines the pieces required to build and configure a container and can be reused many times. The Images section in BigConnect VM lets you interact with the images in an environment.

Networks

BigConnect VM lets you add, remove and manage virtual networks in your environment.

Volumes

A volume is a data storage area that can be mounted into a container to provide persistent storage. Unlike bind mounts, volumes are independent of the underlying OS and are fully managed by the Docker Engine.

Configs

Docker 17.06 introduced swarm service configs, which allow you to store non-sensitive information, such as configuration files, outside a service’s image or running containers. This lets you keep your images as generic as possible, without bind-mounting configuration files into containers or using environment variables. Secrets are another option, for storing sensitive information.

Secrets

A secret is a blob of data, such as a password, SSH private key, SSL certificate, or another piece of data that should not be transmitted over a network, stored unencrypted in a Dockerfile, or stored in your application’s source code. In Docker 1.13 and later, you can use Docker Secrets to centrally manage this data and securely transmit it only to those containers that need access to it.
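Inside a running service, each secret the container was granted shows up as an in-memory file under `/run/secrets/<name>`. A minimal Python sketch of reading one at startup (the helper name and the environment-variable fallback are illustrative, not part of BigConnect VM):

```python
import os
from pathlib import Path
from typing import Optional

def read_secret(name: str, secrets_dir: str = "/run/secrets") -> Optional[str]:
    """Return the value of a Docker secret, or None if it is absent.

    Swarm mounts each granted secret as a file at /run/secrets/<name>,
    which keeps the value out of images, env listings and source code.
    """
    secret_file = Path(secrets_dir) / name
    if secret_file.is_file():
        return secret_file.read_text().strip()
    # Fallback for local development outside Swarm (illustrative).
    return os.environ.get(name.upper())
```

At deploy time the secret itself is created once (for example with `docker secret create db_password -`) and granted only to the services that need it.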

Cluster

This section provides an overview of your Swarm environment. You can view information about environments and nodes, view cluster information, and configure environment-specific settings.

Users

Give access to individual users, then manage them as their needs change over time.

Cloud

Massively scalable graph database service

Big Graph

Our proprietary massively scalable big data graph engine with features like enterprise security, enterprise search, dynamic ontologies, Cypher and SQL language support and much more.

Cypher Lab

Visual interface for querying the Graph Engine using the Cypher query language.

Flexible storage

Choose between RocksDB and Accumulo as backend graph stores, depending on your use-case.

Async Data Processing

Push objects to message queues for additional processing of inserts, updates and deletes. Create data workers that process queued messages to perform tasks like data enrichment, custom computations, or calls to external services.
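The pattern is the classic queue-consumer loop; a self-contained Python sketch (the event shape and the `enrich` callback are illustrative, BigConnect's actual data-worker API is not shown here):

```python
import queue

def run_data_worker(events: queue.Queue, enrich, sentinel=None):
    """Drain a queue of graph change events and return the enriched results.

    Each event describes an insert, update or delete; `enrich` stands in
    for a data-worker task such as entity resolution or an external call.
    """
    results = []
    while True:
        event = events.get()
        if event is sentinel:          # a None marker signals shutdown
            break
        results.append(enrich(event))
    return results

# Queue three change events, as the engine would on each mutation.
q = queue.Queue()
for op in ("insert", "update", "delete"):
    q.put({"op": op, "id": "node-1"})
q.put(None)

enriched = run_data_worker(q, lambda e: {**e, "enriched": True})
```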

Cypher Language support

Full support for the Cypher language for querying and altering graph data.
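For instance, `MATCH (p:Person {name: $name})-[:KNOWS]->(f) RETURN f.name` returns the direct contacts of a person. A toy Python equivalent of that traversal over an in-memory graph (the data and helper are purely illustrative):

```python
# A toy property graph: nodes keyed by id, directed edges as (src, label, dst).
nodes = {
    1: {"label": "Person", "name": "Ada"},
    2: {"label": "Person", "name": "Grace"},
    3: {"label": "Person", "name": "Alan"},
}
edges = [(1, "KNOWS", 2), (1, "KNOWS", 3), (2, "KNOWS", 3)]

def knows(name):
    """Python analogue of: MATCH (p:Person {name: $name})-[:KNOWS]->(f) RETURN f.name"""
    start = {i for i, n in nodes.items() if n["name"] == name}
    return sorted(nodes[dst]["name"]
                  for src, label, dst in edges
                  if label == "KNOWS" and src in start)
```

Cypher expresses the pattern declaratively and leaves the traversal strategy to the engine.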

Enterprise Search

Index nodes and edges to quickly find them based on their property values. Use Trino for running SQL queries against graph objects.
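Conceptually, indexing by property value amounts to an inverted map from `(property, value)` pairs to object ids; a minimal sketch (the data layout is illustrative, not the engine's actual index format):

```python
from collections import defaultdict

def build_property_index(objects):
    """Map each (property, value) pair to the set of object ids holding it."""
    index = defaultdict(set)
    for obj_id, props in objects.items():
        for key, value in props.items():
            index[(key, value)].add(obj_id)
    return index

# Three graph objects with a couple of properties each.
objects = {
    "n1": {"type": "Person", "city": "Paris"},
    "n2": {"type": "Person", "city": "Oslo"},
    "n3": {"type": "Company", "city": "Paris"},
}
index = build_property_index(objects)
```

A lookup like `index[("city", "Paris")]` then answers "all objects in Paris" without scanning the whole graph; Trino's SQL layer plays an analogous role for ad-hoc queries over the same objects.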

Open Source

The graph engine is fully open-source and available on GitHub.

Launch Your BigGraph service

BigConnect concepts

A real-time Big Data platform designed for unstructured data
Collect, enrich, correlate and analyze huge amounts of information into actionable knowledge
Enables a new breed of analytics and applications on all data: structured and unstructured
Manages the complete data lifecycle, from collection to visualization to AI/ML
Self-service, low-code
Full API/SDK support for application developers
Complements existing solutions with unstructured data capabilities

SIGN UP

Create your first Cloud Service

Sign up now