(To find a keyword or term use Ctrl+F)
Quantum ML is an automated machine learning platform that uses global data to forecast financial instruments like stocks, commodities and cryptoassets.
The goal of Quantum ML is to streamline and accelerate the process of evaluating data, building machine learning models, and integrating the combined signals and insights for more profitable decision-making.
Quantum ML can forecast any target variable, and is therefore applicable to any of the following users:
(1) Enterprises and Corporations
(2) Investors
(3) Traders
(4) Buyers
(5) Procurement Officers
(6) Risk Managers
(7) Supply-chain Managers
Each of these groups is interested in the same fundamental process of forecasting data, but each views it through a slightly different lens, since each user type evaluates data and model performance based on different metrics.
All models are trained on at least five years of data, with 20% (one year) of the data held back for ‘backtesting’. The holdback allows us to objectively evaluate model signal performance on known history, as if the model does not know what is going to happen. Model signal performance is then evaluated across standard industry metrics such as information ratio, Sharpe ratio, directional accuracy, return on investment, and cost savings.
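To make the backtesting idea concrete, the sketch below scores a model's signals on a held-back window using two of the metrics named above. The formulas are common industry conventions (sign-based directional accuracy, annualized Sharpe of a sign-following strategy with a zero risk-free rate), not Quantum ML's exact definitions, and the function name is illustrative.

```python
import numpy as np

def evaluate_signals(predicted_returns, actual_returns, periods_per_year=252):
    """Score model signals on a held-back 'backtest' window.

    predicted_returns / actual_returns: per-period returns over the holdout.
    Metric formulas are common conventions, assumed for illustration.
    """
    predicted_returns = np.asarray(predicted_returns, dtype=float)
    actual_returns = np.asarray(actual_returns, dtype=float)

    # Directional accuracy: how often the forecast's sign matches reality.
    directional_accuracy = np.mean(
        np.sign(predicted_returns) == np.sign(actual_returns)
    )

    # Strategy returns: go long when the forecast is positive, short when negative.
    strategy_returns = np.sign(predicted_returns) * actual_returns

    # Annualized Sharpe ratio of the strategy (risk-free rate assumed zero).
    sharpe = (
        np.mean(strategy_returns) / np.std(strategy_returns)
        * np.sqrt(periods_per_year)
    )
    return {"directional_accuracy": directional_accuracy, "sharpe": sharpe}
```

Because the holdout is known history the model never saw, these numbers approximate out-of-sample performance rather than in-sample fit.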
Trial access is offered only on rare occasions, but you can contact our team at firstname.lastname@example.org to find out more.
Cost varies considerably based on user requirements. To get a quote please contact a member of our sales team at email@example.com.
Please contact us at firstname.lastname@example.org.
Superforecasting recognizes the power of probabilistic modelling when applied to uncertain events. It draws on the idea that there’s one thing better than the wisdom of the crowd, and that’s the wisdom of a wise crowd. Superforecasting is a foundational concept that inspires the methodology and technology of Quantum ML.
Perpetual Beta is a concept that acknowledges and reinforces the benefit of constant re-evaluation, updating, and self-improvement. Perpetual Beta is also a foundational concept that inspires the methodology and technology of Quantum ML.
The Wisdom of Crowds is a concept illustrating that aggregating information across a group often results in decisions better than any single member of the group could have made.
Hedgehogs refer to the experts who organize their thinking around Big Ideas and tend to squeeze complex problems into preferred cause-effect templates. They display poorer superforecasting skills as compared to Foxes.
Foxes refer to experts who are more pragmatic than Hedgehogs. These “generalists” gather as much information from as many sources as possible and talk about possibilities and probabilities, not certainties. They readily admit when they are wrong and maintain a dynamic, flexible mindset. Foxes are better at forecasting than Hedgehogs.
The process behind Quantum ML isn’t focused on one “Big Idea” (hedgehogs), but explores thousands of permutations and combinations of data, algorithm selection and parameter possibilities (foxes), and then aggregates the individual model outputs into a challenger/champion framework (wisdom of crowds). Each hypothesis is then revisited periodically to adapt to changing conditions and inputs, where winners are reselected based on the new results, and the process repeats ad infinitum (perpetual beta). The more compute resources the user can allocate to this process, the better the results. This facilitates a dynamic process of continuous improvement via exploration, assessment and adaptation.
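The challenger/champion cycle described above can be sketched as a simple promotion loop: each period, freshly trained challengers are scored against the reigning champion, and the best performer is promoted. The model representation (parameter dicts) and the stub scoring metric here are illustrative assumptions, not Quantum ML's actual implementation.

```python
def run_cycle(champion, challengers, score):
    """One challenger/champion round: score the current champion against
    fresh challengers and promote the best performer.
    Repeating this every period is the 'perpetual beta' loop."""
    pool = [champion] + challengers
    return max(pool, key=score)

# Toy example: "models" are parameter dicts scored by a stub backtest metric.
def score(model):
    return model["backtest_sharpe"]

champion = {"algo": "gbm", "backtest_sharpe": 1.1}
challengers = [
    {"algo": "linear", "backtest_sharpe": 0.8},
    {"algo": "random_forest", "backtest_sharpe": 1.3},
]
new_champion = run_cycle(champion, challengers, score)
```

In practice the pool of challengers spans the thousands of data, algorithm, and parameter permutations mentioned above, which is why more compute yields better results: a larger explored pool gives the selection step more candidates to promote from.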
Rapid Prototyping is a modeling approach that leverages our advanced solutions architecture, Automated Machine Learning, enabling users to build models in minutes without any prerequisite data science expertise.
Rapid Prototyping facilitates the concept and process of perpetual beta.
Automated Machine Learning is a process by which components of the machine learning workflow are automated to streamline and accelerate the modeling process and improve results.
With AutoML, data ingestion, data preparation, model building, daily model updates, as well as model parameter selection and optimization, are all automated and facilitated through a user-friendly interface; no data science or coding expertise required.
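At its core, the automated model-building and parameter-selection step amounts to fitting several candidate models and keeping whichever performs best on a validation split, with no manual tuning. The sketch below shows that pattern in miniature; the candidate list (a mean baseline and ordinary least squares) and the MSE metric are illustrative assumptions, not the platform's actual search space.

```python
import numpy as np

def automl_fit(X, y, val_frac=0.2):
    """Minimal automated model selection: fit each candidate on the
    training portion, score it on a held-out validation portion,
    and return the best. Candidates and metric are illustrative."""
    n_val = int(len(X) * val_frac)
    X_tr, y_tr = X[:-n_val], y[:-n_val]
    X_val, y_val = X[-n_val:], y[-n_val:]

    candidates = {
        # Baseline: always predict the training-set mean.
        "mean": lambda Xt, yt: (lambda Xv: np.full(len(Xv), yt.mean())),
        # Ordinary least squares via numpy's lstsq.
        "ols": lambda Xt, yt: (
            lambda Xv, w=np.linalg.lstsq(Xt, yt, rcond=None)[0]: Xv @ w
        ),
    }

    best_name, best_model, best_err = None, None, np.inf
    for name, fit in candidates.items():
        model = fit(X_tr, y_tr)
        err = np.mean((model(X_val) - y_val) ** 2)  # validation MSE
        if err < best_err:
            best_name, best_model, best_err = name, model, err
    return best_name, best_model
```

A real AutoML system searches far larger spaces of algorithms and hyperparameters and re-runs this selection on a schedule (the daily model updates mentioned above), but the select-by-validation-score loop is the same.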
Autoseries is a specialized machine learning function that automates and optimizes data, algorithm, and parameter exploration via an automated challenger/champion framework. The more compute resources allocated to this process, the better the results.
A view of individual model results, including Driver Analysis, Top Drivers and Performance Metrics.
The Group Dashboard is a visualization and model signal aggregation tool built upon the concepts of ‘superforecasting’ and ‘wisdom of crowds’. With it, users can aggregate signals from many models, each with varied hypotheses and perspectives. Individual model outputs are ensembled and stacked based on dynamic performance criteria to provide improved signal accuracy and performance.
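One simple way to realize the aggregation idea above is to keep only the top-performing models and average their signals, weighted by backtest performance. The top-k selection and performance weighting here are illustrative assumptions about what "dynamic performance criteria" might look like, not the dashboard's documented method.

```python
import numpy as np

def aggregate_signals(signals, performance, top_k=2):
    """Ensemble individual model signals into one group signal.

    signals: (n_models, n_periods) array of per-model forecasts.
    performance: per-model backtest score (assumed positive) used both
    to select the top_k models and to weight their signals.
    """
    signals = np.asarray(signals, dtype=float)
    performance = np.asarray(performance, dtype=float)

    # Keep only the top-k models by backtest performance...
    keep = np.argsort(performance)[-top_k:]

    # ...then average their signals, weighted by that performance.
    weights = performance[keep] / performance[keep].sum()
    return weights @ signals[keep]
```

This mirrors the wisdom-of-a-wise-crowd idea from superforecasting: rather than averaging every model equally, the ensemble listens hardest to the models that have earned it.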
Driver Analysis summarizes the feature importance of the predictors (data inputs) used in a model. The higher the feature importance, the more valuable the data.
The benefit of this feature is that it tells us what data is valuable. Analysis is provided on an individual predictor level as well as at a provider, category and subcategory level. For example, users can easily assess the quality of newly acquired data, either at an individual predictor level, category level or from the data provider as a whole.
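One common way to compute the kind of feature importance Driver Analysis reports is permutation importance: shuffle one predictor column at a time and measure how much the model's error worsens. This is a standard technique shown for illustration; the platform's exact importance method is not specified in the text.

```python
import numpy as np

def permutation_importance(predict, X, y, rng=None):
    """Estimate driver (feature) importance by permutation.

    For each predictor column, shuffle it and measure the increase in
    the model's mean squared error; a larger increase means the model
    relied more on that column, i.e. the data is more valuable.
    """
    rng = rng or np.random.default_rng(0)
    base_error = np.mean((predict(X) - y) ** 2)
    importances = []
    for j in range(X.shape[1]):
        X_perm = X.copy()
        rng.shuffle(X_perm[:, j])  # break the column's relationship to y
        perm_error = np.mean((predict(X_perm) - y) ** 2)
        importances.append(perm_error - base_error)  # error increase = importance
    return np.array(importances)
```

Summing these per-column scores over the columns belonging to one provider, category, or subcategory gives the roll-up views described above.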
Automated Machine Learning is a technology that streamlines and accelerates the processes and workflow typically required to utilize machine learning. This involves the automation of data pre-processing, model creation, parameter adaptation, and model updates. Without automation, this process typically requires a technical team and takes weeks or even months to implement.
Our Group Dashboard combines model outputs via user-defined or automated success criteria (filters), leveraging the wisdom of the many models produced via Rapid Prototyping and Autoseries.
Autoseries is a comprehensive framework and process for building accurate and robust machine learning models that constantly learn and adapt to changing conditions.