Learn the basics of the BigConnect data model — entities, relationships, and properties — and how to map your data to it.
January 15, 2019
The power of BigConnect lies in its ability to correlate data using a specific data model, or ontology. The ontology is dynamic: it can be altered at runtime by adding concepts, relationships, and properties to suit your needs.
The ontology consists of concepts, relationships, and properties, where properties can be assigned to both concepts and relationships. A relationship connects two or more concepts and must have a source concept and a target concept. Concepts are hierarchical: each concept inherits the properties of its parent. At the root of the hierarchy sits the Thing concept, which carries a set of system-defined properties that cannot be deleted; these properties are used throughout the system for various tasks.
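The hierarchy and inheritance rules above can be sketched in a few lines of plain Python. This is an illustrative model, not the BigConnect API; the property names (`createdDate`, `modifiedBy`, `fullName`) and the `Concept` class are invented for the example.

```python
# Minimal sketch of a concept hierarchy with property inheritance,
# rooted at "Thing". Not the real BigConnect API.
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class Concept:
    name: str
    parent: Optional["Concept"] = None
    own_properties: Set[str] = field(default_factory=set)

    def properties(self) -> Set[str]:
        # A concept's effective properties are its own plus
        # everything inherited from its ancestors up to Thing.
        inherited = self.parent.properties() if self.parent else set()
        return inherited | self.own_properties

# "Thing" carries system-defined properties that cannot be deleted.
thing = Concept("Thing", own_properties={"createdDate", "modifiedBy"})
person = Concept("Person", parent=thing, own_properties={"fullName"})

print(sorted(person.properties()))
# Person exposes Thing's system properties plus its own.
```

The key point the sketch captures: deleting a property on Thing would affect every concept in the tree, which is why the system-defined properties are protected.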
BigConnect provides a public ontology, available to everybody, and each workspace can also define its own ontology.
Every object entering the system is either an entity or a relationship. Entities are instances of concepts, and relationships are instances of the ontology's relationship types. It is up to you to assign an appropriate concept to a new object; otherwise it is created as a Thing.
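The fallback rule can be illustrated with a small hypothetical helper — the function name and the dictionary shape are invented for the example and are not BigConnect calls:

```python
# Hedged sketch of the rule that a new entity without an explicit
# concept falls back to the root concept "Thing".
from typing import Optional

def create_entity(entity_id: str, concept: Optional[str] = None) -> dict:
    # When no concept is supplied, the entity defaults to Thing.
    return {"id": entity_id, "concept": concept or "Thing"}

e1 = create_entity("e1", concept="Person")
e2 = create_entity("e2")  # no concept given

print(e1["concept"], e2["concept"])  # Person Thing
```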
Concepts, relationships, and properties support a number of meta-properties that tell BigConnect how to treat them. For example, a concept might have an icon, a color, flags controlling whether it can be updated or deleted, custom scripts, and so on. All of this is documented in the User Guide.
Loading and mapping data to the ontology is straightforward. We currently support structured, unstructured, and semi-structured data formats.
Each format is handled in a specific way. Unstructured information is usually imported as-is, with no mapping beyond its MIME type: a document becomes a Document entity, an image becomes an Image entity, and so on.
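The MIME-type-to-concept step can be sketched as a simple lookup table. The table entries and the fallback behavior shown here are assumptions for illustration, not the actual mapping BigConnect ships with:

```python
# Illustrative sketch: map an unstructured file's MIME type to an
# entity concept, falling back to the generic root concept.
MIME_TO_CONCEPT = {
    "application/pdf": "Document",
    "text/plain": "Document",
    "image/png": "Image",
    "image/jpeg": "Image",
}

def concept_for_mime(mime: str) -> str:
    # Unknown types fall back to Thing, the root concept.
    return MIME_TO_CONCEPT.get(mime, "Thing")

print(concept_for_mime("image/png"))        # Image
print(concept_for_mime("application/x-unknown"))  # Thing
```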
Structured and semi-structured data is imported through a visually assisted process that guides you in choosing which data to map and how to map it.
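Under the hood, that visual step amounts to a column-to-property mapping. The sketch below shows the idea with the standard `csv` module; the column names, property names, and mapping table are all invented for the example:

```python
# Hedged sketch of mapping a structured row (e.g. from CSV) onto
# ontology properties, mirroring the visual mapping step.
import csv
import io

# Hypothetical mapping: source column -> ontology property.
COLUMN_MAP = {"name": "fullName", "dob": "birthDate"}

def map_row(row: dict) -> dict:
    # Keep only mapped columns, renamed to their ontology properties.
    return {COLUMN_MAP[col]: value for col, value in row.items() if col in COLUMN_MAP}

data = io.StringIO("name,dob,ignored\nAda Lovelace,1815-12-10,x\n")
rows = [map_row(r) for r in csv.DictReader(data)]
print(rows)  # [{'fullName': 'Ada Lovelace', 'birthDate': '1815-12-10'}]
```

Unmapped columns (like `ignored` above) are simply dropped, which is also how the assisted import lets you skip data you do not need.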
Data can also be imported and mapped using the API, Apache NiFi, Spark, SQL inserts, Cypher queries, and Pentaho Data Integration. We have how-to guides for each tool and technology you might want to use.