We offer a system that makes unbiased and explainable decisions based on previously verified cases. The design is centred on human-interpretable knowledge, resulting in a system that is transparent to its core.
We believe that useful AI must be explainable. Our system gives exact explanations for its decisions. The explanation is detailed, easily understood by non-technical business end-users, and can be used to document the decision.
The system learns as it processes new cases, building a knowledge base. This knowledge base is human-readable and can be used to analyse the knowledge assets of an organisation. Knowledge becomes tangible and quantifiable.
Unlike opaque machine-learning models, our system gives users control over the decision process. Users decide exactly how the knowledge base is built and which knowledge is included in decision making. Biased decisions can therefore be easily detected and avoided.
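As an illustration of what a human-readable, user-controlled knowledge base with exact explanations might look like, here is a minimal sketch. All names, rules, and thresholds below are hypothetical examples, not the product's actual implementation:

```python
# Hypothetical sketch: each rule is derived from a previously verified case,
# and every decision carries the exact rules that produced it.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    name: str                          # human-readable label shown in explanations
    condition: str                     # plain-language condition, readable by end-users
    predicate: Callable[[dict], bool]  # executable form of the condition
    conclusion: str                    # decision the rule supports

@dataclass
class KnowledgeBase:
    rules: list = field(default_factory=list)

    def add_rule(self, rule: Rule) -> None:
        # The user controls exactly which knowledge enters the base.
        self.rules.append(rule)

    def decide(self, case: dict) -> tuple:
        # Collect every rule that fires; the fired rules double as the explanation.
        fired = [r for r in self.rules if r.predicate(case)]
        if not fired:
            return "refer to human", ["no applicable rule in the knowledge base"]
        decision = fired[-1].conclusion  # later (more specific) rules refine earlier ones
        explanation = [f"{r.name}: {r.condition}" for r in fired]
        return decision, explanation

# Example rules (illustrative only)
kb = KnowledgeBase()
kb.add_rule(Rule("R1", "income above 50k", lambda c: c["income"] > 50_000, "approve"))
kb.add_rule(Rule("R2", "recent default on record", lambda c: c["defaults"] > 0, "decline"))

decision, why = kb.decide({"income": 60_000, "defaults": 1})
# decision is "decline"; why lists both fired rules in plain language
```

Because every rule is stated in plain language and each decision names the rules behind it, the explanation can be handed directly to a non-technical end-user and archived alongside the decision.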