Quantum Leap: Harnessing the power of AI at scale
Table of Contents
- Introduction
- From inspection to automation: The role of machine vision in modern manufacturing
- How knowledge graphs are the answer to better decisions for your business
- Data drift: Identifying. Preventing. Automating.
- Quantum computing is here. Is your business ready?
Introduction
In today's rapidly evolving landscape, Artificial Intelligence (AI) is transforming how we live and work. Across sectors, AI is revolutionizing operations and enabling profound change. This compendium explores AI's transformative potential, its practical applications, and the new business opportunities it creates.
It opens with "From inspection to automation: The role of machine vision in modern manufacturing," in which Prosenjit Banerjee, Principal Data Scientist, examines the integral role machine vision plays in contemporary manufacturing: automating inspection processes, enhancing efficiency, and reducing costs, propelling the industry forward. Next, "How knowledge graphs are the answer to better decisions for your business," by Senior Data Scientist Sanjeev Kumar, shows how knowledge graphs improve business decision-making, with real-world applications across finance, healthcare, and e-commerce. By connecting and representing intricate data from diverse sources, knowledge graphs give organizations richer insights and predictive capabilities. Snehotosh Banerjee, Principal Architect of AI@Scale, then takes up data drift and its prevention in "Data drift: Identifying. Preventing. Automating.", highlighting the damaging effects of drift on model performance and accuracy and outlining proactive measures to identify and mitigate it, keeping machine learning models reliable and effective.
Concluding the compendium, Prateek Jain, Lead Architect of AI@Scale, and Srinjoy Ganguly, Senior Data Scientist of AI@Scale, venture into the nascent domain of quantum computing. "Quantum computing is here. Is your business ready?" explains the advantages quantum computing holds over classical computing for tackling intricate problems at unprecedented speed, surveys the current state of the technology, and explores its potential applications across industries. Together, these articles give readers a comprehensive view of the intersection of AI and business. Harnessing the potential of AI at scale enables organizations to fuel growth, foster innovation, and forge a path toward a prosperous future.
From inspection to automation: The role of machine vision in modern manufacturing
Author: Prosenjit Banerjee | Principal Data Scientist | Machine Vision and Conversational AI
Machine vision is revolutionizing the way companies manufacture, transforming supply chain and logistics systems into smart cyber-physical systems that can extract vital information remotely through images and videos.
Until recently, the major applications of machine vision in manufacturing were condition or process monitoring systems. In the early days, the motive was to reduce dependency on human operators: a camera installed strategically in a manufacturing unit captured continuous events or sequences as images and videos. This unstructured data was processed with state-of-the-art machine vision algorithms to infer the criticality of events in real time and augment an operator's knowledge for decision-making. This led to the development of automated optical inspection (AOI) systems, which could non-intrusively detect defects in manufactured units on a production line and prevent faulty units from reaching delivery.
The introduction of ethernet server-based systems enabled vision sensors to stream, store, process, and infer from large volumes of data. It became possible to interconnect multiple vision systems and draw quicker inferences, enabling faster decision-making. Suddenly, production lines became faster, with informed or scheduled downtimes and significantly fewer manufacturing errors. Simultaneous advances in camera and vision sensor technology and in machine vision algorithms scaled the adoption of vision sensors to other manufacturing needs: situational awareness on factory floors, regulating human intervention in critical or hazardous areas, maintaining safety standards such as adherence to helmets, gloves, and PPE kits, detecting fire outbreaks early, preventing accidents, and improving asset management.
The growth and adoption of cloud technology and IoT fostered manufacturing investment in digital platforms. With a history of production knowledge and decision-making on record, replicating or repurposing manufacturing systems became the need of the hour.
The concept of digital twins was introduced as an analytical environment for closed-loop decision-making about any known entity in a value chain, connecting decisions from the strategic to the operational level. Machine vision has supported the realization of digital twins, from creating virtual environments for production to the virtual commissioning of machines.
Machine vision generates compelling business value among early adopters. However, as Gartner states, the highest-value machine vision solutions are hard to adopt, replicate, and scale. McKinsey pegs the number at 72% of organizations that have tried to adopt machine vision but met with limited success. How, then, do we harness the power of machine vision applications? How do we architect adaptable and scalable machine vision solutions?
Machine vision has always delivered significant value within uniform parameters or working conditions. The difficulty in scaling machine vision solutions in some sectors may be attributed to the lack of standardization in their processes; addressing it may require additional investment where legacy systems have to be upgraded to meet required standards. Other roadblocks to machine vision adoption arise from data privacy, where data about critical processes is unavailable for model building.
Exceptional progress in vision feature engineering, coupled with a steep rise in deep learning and camera sensor technology, has opened a plethora of opportunities across manufacturing and the supply chain that earlier existed only in theory.
Machine vision enhancing manufacturing efficiency
Machine-vision-powered manufacturing and supply chain solutions have been driving improved production efficiency across different industrial sectors. In recent years, machine vision and cloud computing have also enabled the use of augmented and virtual reality (AR/VR).
Bulk manufacturing, customization & packaging
The advent of new classes of computer vision algorithms, such as generative models and GANs, has made it possible to produce several variants of a base design with little effort, scaling up production with ease. Machine vision has also made customization possible, where a product can be tweaked to cater to a specific category, bringing down turnaround time significantly. Machine vision solutions help identify products accurately by type and size, on the basis of which sorting and packaging are done effectively.
Integration with robotics
Two domains have grown alongside machine vision: sensor technology and algorithms, which together give robots sight and intelligence. Sensors make sight possible, with 3D imaging enabling robots to understand the distance between two points, while intelligence allows robots to understand what is happening between those points and mitigate any risks. With this, basic production tasks such as sorting machine parts or placing parts in designated baskets, which may take humans a lot of time, are executed by robots in a fraction of the time.
Virtual commissioning of machines through simulation assists in rigorous testing of production parameters without running the actual machines. This reduces the need to procure expensive machines for testing and cuts costs by large margins.
Production line quality check
With high-end sensor technology, a machine vision system provides deep analytics and insights about a production line. It enables quality checks to run remotely with minimal human intervention, spotting even the tiniest defects and the root causes behind them. Accuracy and the ability to execute from remote locations give machine vision solutions their edge.
Replicating production process
With the changing geopolitical scenario, a typical production unit may cater to different geographies, demanding an understanding of local production requirements. Machine vision enables replicating a current production line into new ones by extracting information and knowledge from existing lines, such as the required ambient temperature or the common defects in a production process. A model is formed, which is then implemented and trained at the new plant.
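As a concrete illustration of the automated optical quality check described above, here is a minimal sketch using OpenCV. The golden-reference approach, file names, and thresholds are illustrative assumptions, not a description of any deployed system.

```python
import cv2

# Compare a captured unit against a "golden" reference image and flag
# regions that differ beyond a tolerance, a classic AOI-style check.
# File names and thresholds here are illustrative placeholders.
reference = cv2.imread("golden_unit.png", cv2.IMREAD_GRAYSCALE)
captured = cv2.imread("line_capture.png", cv2.IMREAD_GRAYSCALE)

# Pixel-wise difference highlights deviations from the reference part.
diff = cv2.absdiff(reference, captured)
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

# Contours of the difference mask approximate candidate defect regions.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
defects = [c for c in contours if cv2.contourArea(c) > 25]  # ignore sensor noise

if defects:
    print(f"Flagged {len(defects)} candidate defect region(s); divert unit for review.")
else:
    print("Unit passes optical inspection.")
```

In production such checks typically run on calibrated, consistently lit images; the point here is only the non-intrusive compare-and-flag pattern.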
Addressing the top 3 challenges
Manufacturers still have apprehensions about adopting machine vision solutions. However, consistent advances in the domain are addressing their concerns.
Halting production
CHALLENGE
Production lines must transform seamlessly, with defects and concerns addressed without halting production or losing product volume.
SOLUTION
State-of-the-art depth cameras enable quick scanning of machine parts, reverse engineering, editing changes, printing fresh designs, and putting processes back into production almost immediately. This 'non-intrusiveness' of machine vision technology enables its adoption into existing processes without halting or significantly changing the production line.
Scaling production
CHALLENGE
There is an urgency to replicate or scale up successful machine vision solutions. However, the transition may not be smooth as requirements and models may vary.
SOLUTION
Machine vision technology can undoubtedly deliver value across almost all manufacturing value chains. For widespread adoption, vision cameras need to get cheaper. Data has to be made available for continuous model training to address data drift, while adhering to data access standards and privacy requirements. Simultaneously, as the appetite for expedited manufacturing grows, there is a dire need for openness in standardizing manufacturing processes and aligning on targeted delivery. This may require investment in upgrading toward uniform production lines.
Lack of data
CHALLENGE
Plenty of data flows into manufacturing units, but it may not be retained or labeled. Since success depends heavily on the quality of available data, the absence of labeled data, or of any data at all, is a major hurdle when delivering solutions to the manufacturing and supply chain industry.
SOLUTION
Machine learning, the engine behind machine vision, can now build models with a smaller data footprint. Where thousands of labeled images were once required to train models, semi-supervised learning methods can now fine-tune models with very little or partially labeled data to produce effective solutions.
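As an illustration of training with limited labels, here is a minimal transfer-learning sketch in PyTorch/torchvision, a simpler cousin of the semi-supervised methods mentioned above. The dataset path and the two-class ok/defect setup are assumptions for the example.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models

# Fine-tune a pretrained backbone on a small labeled set: freeze the
# feature extractor and train only a new classification head.
# The dataset path and two-class (ok/defect) setup are placeholders.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # ok vs. defect

data = datasets.ImageFolder("defects_small/", transform=weights.transforms())
loader = DataLoader(data, batch_size=16, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Because only the small head is trained, a few hundred labeled images can suffice where training from scratch would need thousands.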
Fractal initiatives in different sectors
DEFENCE
Developed a threat-detection engine using a high-precision image and video analytics platform that applied cutting-edge deep learning techniques to challenging video data. Launched at DefExpo 2022, the IVA-HWKI is more accurate than a human operator and robust across a range of environments.
RE-INSURANCE
Using machine vision, extracted intelligence from temporal aerial imagery and provided remote assistance to one of the largest property re-insurers to settle claims.
OIL & NATURAL GAS
Automated the defect detection process from 3D data.
INSURANCE
Through document digitization, helped insurance firms with smart verification of the different formats of documents essential to their internal processes.
MANUFACTURING
Supported a multinational manufacturer in automating multiple production processes and carrying out remote warehouse monitoring. Helped a manufacturing giant with defect monitoring of products on production lines.
How Fractal tackles data privacy issues
Data privacy is a big concern. The fact that the data can be reused and models repurposed is a cause of constant worry among stakeholders.
Fractal has a dedicated machine vision team focused on efficient solution deployment. In most cases, the hardware rests with third parties, but once the data flows in, the team builds models with limited annotated data to provide state-of-the-art solutions. A large part of the solution is then deployed on the client's device, followed by regular model updates. More and more solutions are being moved to the edge to strengthen privacy further: with edge technology, data can be processed closer to where it is generated, and organizations prefer solutions where most of the data resides on their own devices. Fractal has been working in federated learning for some time now, and our solutions always factor in privacy preservation.
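To make the federated learning idea concrete, here is a minimal sketch of federated averaging (FedAvg) in plain NumPy. The two-client setup, linear model, and synthetic data are illustrative assumptions, not Fractal's implementation.

```python
import numpy as np

# Federated averaging (FedAvg) in miniature: each client trains on its
# own private data and shares only model weights, never raw records.
# The linear model and synthetic data below are illustrative placeholders.

def local_train(weights, X, y, lr=0.1, steps=50):
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(2):  # two sites, each with private local data
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

global_w = np.zeros(2)
for round_ in range(10):
    # Each client refines the global model locally...
    local_ws = [local_train(global_w, X, y) for X, y in clients]
    # ...and the server averages the returned weights.
    global_w = np.mean(local_ws, axis=0)

print("Learned weights:", global_w)  # approaches true_w without pooling data
```

The privacy benefit is structural: the server only ever sees weight vectors, so raw images or records never leave the client's device.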
Fractal Edge
Fractal has a team of self-driven and enthusiastic professionals from premium Indian institutes. While freshers bring their enthusiasm, experienced professionals bring domain knowledge. The machine vision team has been built over the years; today it has the capability for in-depth research, putting that research into production, and delivering effective solutions.
Way forward
Intensifying research, advanced machine learning engineering, and progress in deep learning are fuelling the growth of machine vision. The technology is still expensive, but personalized and customized solutions could soon become the norm, enabling smaller manufacturers to afford them. The future will see manufacturers accelerate digital transformation through machine vision solutions and achieve new levels of accuracy, efficiency, and quality.
How knowledge graphs are the answer to better decisions for your business
Author: Sanjeev Kumar | Senior Data Scientist | AI@Scale, Machine Vision & Conversational AI
Knowledge graphs are increasingly powering AI applications today. Yet, for scalable implementations that solve enterprise data integration challenges, leaders must take an agile approach to knowledge graph development. According to Gartner¹, by 2024, enterprises using knowledge graphs and semantic approaches will have 75% less AI technical debt than those that don't. Knowledge graphs can solve multiple data integration challenges, related to data complexity, quality, and accessibility, that are still barriers to AI adoption. With knowledge graphs, there is a fluid data environment using uniform identifiers, flexible schemas, and triples instead of tables. However, there are varied perspectives on what a knowledge graph constitutes: data management teams focus on the creation, use, and representation of metadata; knowledge engineers see knowledge graphs as a means of domain understanding; and business users look toward the hidden insights that can be surfaced through specific links and data relationships.
Taking advantage of this potential for collective intelligence, knowledge graphs are being used in:
- Pharma & life sciences: for drug discovery
- Financial services: to detect fraud and make better investment decisions
- Manufacturing & electronics: to ward off risks and improve investment analytics
- Manufacturing: to optimize production lines and supply chains
- Search & chatbots: for recommendations and question answering
Knowledge graphs give businesses a bird's-eye view of their entire data, allowing them to capture interesting insights and establish relationships between the entities involved that would otherwise be difficult to envision.
Let’s find out how knowledge graphs are the answer to better decisions for your business.
Using knowledge graphs
A knowledge graph is a semantic network of three components: nodes, edges, and labels. Nodes can represent anything: people, places, objects. Nodes are connected by edges, which signify a relationship between them, and each edge carries a label that defines the meaning of the relationship. These labels are part of the ontology that drives a knowledge graph's schema. Businesses use knowledge graphs to link and integrate data (much of which exists in silos) and form multiple interconnections, organizing data from various sources, whether complex or simple, structured or unstructured.
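A minimal sketch of this node-edge-label structure, using networkx; the entities and relations are invented for illustration.

```python
import networkx as nx

# A knowledge graph as labeled triples: (subject) -[predicate]-> (object).
# Entities and relations below are illustrative examples.
kg = nx.MultiDiGraph()
triples = [
    ("Acme Corp", "headquartered_in", "London"),
    ("Acme Corp", "supplies", "Widget X"),
    ("Widget X", "made_of", "Aluminium"),
    ("Beta Ltd", "supplies", "Widget X"),
]
for subj, pred, obj in triples:
    kg.add_edge(subj, obj, label=pred)

# Traversing labeled edges surfaces relationships that are hard to see
# in tables: e.g., all suppliers of "Widget X".
suppliers = [s for s, o, d in kg.edges(data=True)
             if o == "Widget X" and d["label"] == "supplies"]
print(suppliers)  # ['Acme Corp', 'Beta Ltd']
```

Production systems typically use a triple store or graph database with an ontology-backed schema, but the underlying data shape is exactly this set of labeled triples.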
1. https://www.gartner.com/en/documents/3985680
How Fractal integrates knowledge graphs for better business decisions
While Fractal has used knowledge graphs to solve client use cases in the past, it has also supported enterprises with their own knowledge graph initiatives. One such organization builds knowledge graphs by assimilating knowledge from multiple heterogeneous sources and condensing it into a single graph. Fractal helped the organization build the data engineering infrastructure, which hands the transformed data to NLP engines for extracting triples, the building blocks of a knowledge graph.
[Fig 1: Fractal knowledge graph² initiative for a life sciences company. Inputs span multiple sources and data types: AZ internal data (omics, literature, chemistry), databases, and other open-source data such as Semantic Scholar. An NLP + graphs pipeline produces the outputs: a property graph of nodes and edges supporting search and reporting.]
How Fractal is ahead of the curve in client servicing through knowledge graph solutions
Cost-effective fraud detection for a top British commercial insurance firm
We detected and identified multiple frauds involving solicitors, doctors, repairers, and claimants, who were then handed over to the Special Investigative Unit (SIU). This led to 40% savings on the SIU's effort of manually going through every claim.
With our solution, we identified incremental cash information for non-PAN accounts using fuzzy matching. We also recognized shell companies, benami properties, cash splits, and other such patterns using link analysis and pattern mining. This helped establish high-risk fund-flow channels in a large, connected set of accounts.
A leading pharmaceutical company in the US
Based on the members’ profiles, past purchases, and conditions, we identified pharmaceutical items for upsell.
Customer acquisition, expansion, and segmentation in a major UK-based CPG enterprise
Fractal built a Consumer 360 knowledge graph related to company purchases. This helped the organization find loyal customers across different brands and regions and enabled lead generation and customer segmentation.
Detection of tax evasion and non-compliance for an Indian Government entity
Fractal identified potential cases of tax evasion and non-compliance for an Indian Government entity. The approach was to use pattern matching to identify entities whose cash deposits were not in line with their income.
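As a small illustration of the fuzzy-matching step these engagements rely on, here is a sketch using Python's standard difflib; the names and the similarity threshold are invented for the example.

```python
from difflib import SequenceMatcher

# Fuzzy matching links records that refer to the same real-world entity
# despite spelling variations, a common first step before link analysis.
# The account names and the 0.85 threshold are illustrative placeholders.
accounts = ["Rajesh Kumar Traders", "Rajesh Kr. Traders", "Sunrise Exports"]
claims = ["RAJESH KUMAR TRADERS PVT", "Sunrize Exports"]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for claim in claims:
    best = max(accounts, key=lambda acc: similarity(claim, acc))
    score = similarity(claim, best)
    if score >= 0.85:
        print(f"{claim!r} -> {best!r} (score {score:.2f})")
    else:
        print(f"{claim!r}: no confident match (best {score:.2f})")
```

Once records are linked this way, the merged entities become nodes in a graph, and link analysis can trace fund flows across them.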
2. Building a Knowledge Graph with Spark and NLP: How We Recommend Novel Drugs to our Scientists (slideshare.net)
How to build knowledge graphs
Structured data in a business sits in relational databases and follows an enforced schema, while unstructured data has no fixed schema. Both kinds of data are combined to extract entities and form relationships between them. These entities and relationships conform to ontologies that define the schema of the knowledge graph. By ensuring data consistency and a shared understanding of the data model, ontologies serve as the basis for knowledge graph instances. Knowledge graph development calls for a considerable investment of time and money.
Steps (a sketch of step 3 follows the list):
1. Collect all sources of data, structured and unstructured.
2. Based on ontologies, organize the data to a common standard.
3. Apply reasoning algorithms to derive new knowledge from the data and extract entities and the relationships between them.
4. Apply semi-automatic and manual data validation methodologies to establish the correctness of the extracted knowledge.
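As an illustration of step 3, here is a minimal sketch of extracting candidate triples from text with spaCy. The sentences, the subject-verb-object heuristic, and the model name are illustrative assumptions, far simpler than a production NLP engine.

```python
import spacy

# Extract candidate (subject, relation, object) triples from unstructured
# text using a dependency parse. This naive subject-verb-object heuristic
# is illustrative; real relation-extraction models are far richer.
nlp = spacy.load("en_core_web_sm")
doc = nlp("Acme Corp acquired Beta Ltd. Beta Ltd manufactures sensors.")

triples = []
for sent in doc.sents:
    for token in sent:
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "attr")]
            for s in subjects:
                for o in objects:
                    # Expand to full noun phrases for readable entities.
                    triples.append((" ".join(t.text for t in s.subtree),
                                    token.lemma_,
                                    " ".join(t.text for t in o.subtree)))

print(triples)  # e.g., [('Acme Corp', 'acquire', 'Beta Ltd'), ('Beta Ltd', 'manufacture', 'sensors')]
```

The extracted triples then feed the validation step, where semi-automatic and manual checks weed out incorrect extractions before they enter the graph.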
Challenges faced
With data growth, knowledge graphs need regular updating to cover incoming data. The algorithms used to create that data may be less than 100% accurate, so knowledge graphs built from it may carry inaccuracies forward; over time, cumulative inaccuracies can lead to significant errors. There is also the challenge of disambiguation, where the same words or phrases carry different meanings (e.g., "turkey" can be a bird or a country), making disambiguation of such data necessary. Moreover, data can come in different languages as well as different structures, so it is crucial to translate it into one common language before further processing.
Knowledge graphs can easily capture a small amount of data but struggle to expand as data scales up. Being able to deploy knowledge graphs at scale therefore becomes essential.
Charting the way ahead
Integrating data across the enterprise and supporting complex decisions, knowledge graphs will drive the next wave of technological advancement, and organizations eyeing scalable implementations to solve enterprise data challenges will take an agile approach to building them. That said, one must have a strong business case, as knowledge graphs involve considerable time, expertise, and money. While large organizations with vast data can build their own graphs, data-driven medium and small enterprises can opt for open-source knowledge graphs and customize them to their use case. This gives every organization the wherewithal to develop knowledge graphs that prove valuable in business decision-making.
Data drift: Identifying. Preventing. Automating.
Author: Snehotosh Banerjee | Principal Architect | AI@Scale
Data drift refers to unexpected changes in data patterns, data structure, and semantics, where the data fed into a model differs from the data it was built on. The past 24 months of disruption have given a new dimension to the data drift challenge: businesses grappled with changing consumer behaviors, leading to shifting data patterns that could disrupt entire processes. The question, then, is how we can architect for change, manage data drift, and even harness its power to accelerate digital transformation for the business.
Observability permits teams to interpret and explain unexpected behavior and to manage data effectively and proactively. Even though drift prevention may not be completely possible, drift can be managed to a large extent.
At Fractal, we capture all types of drift, but covariate shift is the most prevalent and the most widely monitored. This is mainly done at the feature store level, anticipating drift by comparing the distributions of representative data sets. The right observability strategy translates to higher reliability, improved consumer experience, and scaled productivity.
Detecting data drift
There are multiple ways to detect data drift. One approach uses statistical tests that compare the distribution of baseline data with live production data: if there is a significant difference between the two distributions, drift has occurred. Data drift detection rests on three broad pillars: observability, the freshness or current relevance of the data, and data quality.
[Figure 1: Data drift detection rests on observability, freshness, and data quality]
Observability
Data can be static or dynamic, but irrespective of its nature it is subject to variation; what differs is the intensity of that variation. Businesses should start with data observability early to understand and spot these changes.
Freshness check
With consumer behavior changing dynamically and rapidly, model performance degrades over time. Businesses must regularly check data freshness and volume and monitor changes in the data schema; any schema change can signal potential data drift. Only a few kinds of models, such as computer vision or language models, can last a long time without updates. The model quality metric is the ultimate measure, whether accuracy, mean error rate, or downstream business KPIs.
Data quality
At times, the data fed into the serving model may be skewed, or its distribution may have changed compared to the training data. Wrong data is thus a data quality issue: data that is incomplete, incorrect, or full of duplicates can lead to data drift.
The main concern of data quality is whether the data is 'right' or 'wrong.' A few common sources of data quality issues include:
- Incorrect data entry
- Quality control failing to remove data quality issues
- Duplicate data records
- Data not being used or interpreted correctly
- Not all available data about an object being integrated
- Data too old to be useful in the current context
It is critical to have quality assurance for features, both from an ethical standpoint and to anticipate the probability of drift; the sketch below illustrates a few such checks.
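A minimal sketch of automated feature-quality checks in pandas; the file, column names, bounds, and freshness window are illustrative assumptions.

```python
import pandas as pd

# Lightweight feature-quality checks of the kind listed above, run before
# data reaches the serving model. The column names, bounds, and the
# one-day freshness window are illustrative placeholders.
df = pd.read_parquet("feature_snapshot.parquet")

report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "null_fraction": df.isna().mean().round(3).to_dict(),
}

# Freshness: flag the snapshot if the newest record is older than a day.
latest = pd.to_datetime(df["event_time"]).max()
report["stale"] = bool(pd.Timestamp.now() - latest > pd.Timedelta(days=1))

# Range check: a numeric feature outside its expected bounds hints at
# incorrect entry or upstream breakage.
report["out_of_range"] = int(((df["unit_price"] < 0) | (df["unit_price"] > 10_000)).sum())

print(report)
```

Checks like these are cheap to automate and catch most of the listed issue sources (duplicates, stale data, bad entries) before they surface as model drift.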
Challenges faced
The industry has numerous tools, and a business has different teams. These teams use different tools to handle each component, making it difficult to keep tight integration across the data. Moreover, a hundred tools do the same work, and there is no clear benchmarking yet; organizations tend to choose tools without understanding the actual requirement, which complicates things.
How automation can help
The automation journey has already started. However, adoption is where businesses face challenges, from both the technical and operational sides. Cloud providers are moving toward automation, yet 100% automation may not be possible, because the more flexibility we try to bring in through automation, the more complex the engineering can get. It is important to understand how much flexibility is desirable relative to the system's complexity, so understanding the requirement well is the first step; it tells us how far to go. There must be a balance between flexibility and complexity. Even 70 to 80% automation would reduce repetitive work and ensure data quality and data monitoring. To drive results with data drift automation, we need five pillars of AI engineering, or machine learning operations, in place.
Pillars of AI engineering
- A solid data curation layer (data lake, data warehouse)
- Feature store and feature QA
- Model training and management
- CI/CD
- Monitoring
If we overcomplicate the ecosystem, it becomes difficult to manage. The art is to keep things simple and still deliver value.
How we can help
At Fractal, great importance is given to monitoring and observability, not only for the model but across the entire machine learning lifecycle. We are currently implementing a monitoring ecosystem in two projects and have created a complete end-to-end ecosystem for the ML lifecycle and monitoring as part of an internal CoE initiative.
We used an entirely open-source tech stack, running on Kubernetes. Our goal is to move toward CNCF and use state-of-the-art tools with good community support.
Useful considerations
Data scientists develop features and patterns by transforming raw data into meaningful inputs, resorting to feature engineering for flexibility. However, when data is vast and complex, it is best to automate with easy-to-adapt designs. The mantra is not to complicate, and to find solutions where integration is seamless.
CONSIDERATIONS
- Develop data features
- Feature engineering for flexibility
- Easy-to-adapt designs
- Automation to reduce dependency
Conclusion
Most businesses are adopting machine learning for key business decisions. However, limited data quality and the lack of ability to evaluate it create trust issues. In the coming years there will be a continuous transition of technology, with data automation truly taking off. For that to happen, the quality of data, the simplicity and flexibility of solutions, and easy adaptability are key. What is required is collaborative work between data scientists and data engineers, expanding their boundaries into each other's domains to improve data quality and model implementation. The key focus will remain productionizing models with regular maintenance.
Quantum computing is here. Is your business ready?
With the technological advances in front of us, businesses can reap true value from quantum computing in the next five years. Is your business prepared for this next big wave?
Authors: Prateek Jain | Lead Architect | AI@Scale • Srinjoy Ganguly | Senior Data Scientist | AI@Scale
Businesses are ranking quantum computing high on the priority list for data science, machine learning, and AI. Gartner³ predicts that by 2023, 20% of organizations will be budgeting for quantum computing projects. However, are organizations equipped to identify the problems that can leverage quantum algorithms? Are they ready to extract maximum business value from quantum computing initiatives?
Quantum computing in brief
Based on quantum mechanics, quantum computing enables some computations to be performed far faster and more efficiently than classical computing. Unlike a classical bit, which holds one of two logical values, a quantum computer represents a two-state quantum mechanical system through physical properties such as the spin of an electron (up or down), the low or high energy states of trapped ions, or the horizontal or vertical polarization of a photon. The qubit, the most basic unit of quantum information, can exist simultaneously in a combination of quantum states, and groups of qubits in superposition create exponentially large, multidimensional computational spaces. Quantum computing draws its computational speed and power from the combination of two special properties: superposition and entanglement.
Application of quantum computing
An ever-expanding quantum-computing ecosystem promises to generate substantial value and revolutionize industries, especially pharmaceuticals, finance and banking, alternative materials, and energy research. Life sciences is one of the industries where quantum computing is expected to be used extensively.
[Figure 1: Quantum computing and its key application areas. Computational chemistry/biology: QC in life sciences and biology, designing better drugs and new materials. Cryptography: quantum key distribution, quantum cryptographic algorithms. Alternative material & energy research: efficiently converting atmospheric CO₂ to methanol, nitrogen fixation, new catalysts, artificial photosynthesis. Finance & banking: dynamic portfolio optimization, risk management, option pricing for complex derivatives. Quantum communication: quantum teleportation, quantum internet, satellite communication. Supply chain & logistics: NP-hard scheduling and logistics problems, the vehicle routing problem.]
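To make superposition and entanglement concrete, here is a minimal sketch in Qiskit (one of the libraries the Quantum AI team names later in this article); the circuit is the textbook Bell-state example, not anything application-specific.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Superposition and entanglement in two gates: a Hadamard puts qubit 0
# into an equal superposition, and a CNOT entangles it with qubit 1,
# producing the Bell state (|00> + |11>)/sqrt(2).
bell = QuantumCircuit(2)
bell.h(0)
bell.cx(0, 1)

state = Statevector.from_instruction(bell)
print(state.probabilities_dict())  # {'00': 0.5, '11': 0.5}: outcomes perfectly correlated
```

Measuring either qubit instantly determines the other, which is the correlation that quantum algorithms exploit to explore large computational spaces.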
3. https://www.gartner.com/smarterwithgartner/the-cios-guide-to-quantum-computing
Applying quantum computing in life sciences
As quantum computing matures in both qubit quality and quantum volume, pharma companies are exploring the application of quantum computers and software in chemistry and biology. Understanding the structure of proteins and how they fold is crucial to treating diseases such as Alzheimer's, Huntington's, Parkinson's, cancer, and even COVID. AlphaFold, with its deep-learning-based algorithm, is bound by several approximations of the neural network and takes a huge amount of computing power and time. These are instances where classical computers or supercomputers are not powerful enough to make progress. Quantum computing systems, with their ability to process huge amounts of information and data within a short time, can be the way forward.
Frameworks and applications: We have designed frameworks focused on the study of chemistry (the structure of molecules) and life sciences. Fractal-orchestrated application layers dwell on discovering and simulating drug-target interactions, chemical interactions, protein folding, protein interactions, and compound generation. Being co-related, the layers will eventually branch out into several applications performing chemical and biological simulations.
Algorithms: We have identified algorithms applicable to protein folding and chemical interaction problems. Efforts are underway to develop different combinations and better variations of these algorithms, with the aim of predicting workable molecules and proteins.
Libraries and frameworks: Leveraging established quantum machine learning libraries such as PennyLane and Qiskit, our Quantum AI team is developing full-stack, application-focused frameworks. Fractal has access to mature life sciences and open-source chemistry frameworks from Google and Rosetta Commons, enabling these open-source frameworks to interface with libraries for simulations and chemical approximations.
Hardware stack: Our hardware stack, accessible via the cloud, covers two models: the adiabatic model, which is restricted and has hard-to-predict behavior at scale, and the gate-based circuit model, which has predictable behavior at scale. Fractal is more inclined toward the gate model, as it is closer to a universal quantum computer and gives the flexibility to control each qubit. A sketch of the kind of variational chemistry workflow these libraries support follows.
Fractal efforts in other areas of quantum computing
Our Quantum AI team is studying generative quantum machine learning for various problems and applications in quantum chemistry, finance, and the study of many-body systems. Demonstrating quantum advantage over a classical algorithm, we simulated the molecular interaction between a subset of HIV and a hypothetical antiretroviral drug. You can find our entire work under Quantum Computing in HealthCare — Protein Folding Part-1 and Quantum Computing in HealthCare — Protein Folding Part-2. Also underway is protein fold prediction with simulated annealing and D-Wave's adiabatic quantum annealer.
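Here is a minimal sketch of a variational quantum eigensolver (VQE) estimating the ground-state energy of H₂ with PennyLane, one of the libraries named above. The molecule, geometry, ansatz, and optimizer settings are illustrative assumptions, not Fractal's actual framework, and the run requires PennyLane's quantum chemistry extras.

```python
import pennylane as qml
from pennylane import numpy as np

# Variational quantum eigensolver (VQE) for the H2 molecule: a classical
# optimizer tunes a parameterized quantum circuit to minimize the energy
# expectation value of the molecular Hamiltonian.
symbols = ["H", "H"]
coordinates = np.array([0.0, 0.0, -0.6614, 0.0, 0.0, 0.6614])  # illustrative geometry
hamiltonian, n_qubits = qml.qchem.molecular_hamiltonian(symbols, coordinates)

dev = qml.device("default.qubit", wires=n_qubits)
hf_state = qml.qchem.hf_state(electrons=2, orbitals=n_qubits)

@qml.qnode(dev)
def energy(theta):
    qml.BasisState(hf_state, wires=range(n_qubits))  # Hartree-Fock reference
    qml.DoubleExcitation(theta, wires=[0, 1, 2, 3])  # variational excitation
    return qml.expval(hamiltonian)

opt = qml.GradientDescentOptimizer(stepsize=0.4)
theta = np.array(0.0, requires_grad=True)
for _ in range(40):
    theta = opt.step(energy, theta)

print(f"Estimated ground-state energy: {energy(theta):.6f} Ha")
```

The same loop structure, a quantum circuit scored by a classical optimizer, underlies the larger chemical and protein-folding simulations the article describes.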
Quantum computing is expected to provide a higher level of clarity about the workings of biological processes, leading to accurate, fast-tracked cures for diseases.
Fractal contribution toward life sciences
At Fractal, we believe quantum computing can disrupt the life sciences industry through its increasingly accurate protein folding predictions, and breakthrough drug discoveries. Toward that, we have designed frameworks and applications, identified algorithms, and are developing libraries and frameworks.
The challenge
While structural and technical problems are common in quantum computing applications, a few other challenges appear simple but can hamper progress.
Lack of awareness: Companies are aware of quantum computing, but the majority struggle to understand what it means for their industry and business, or whether their classical problems can be graduated to quantum problems. There is a pressing need to raise awareness among business leaders of the magnitude and depth of quantum computing and what it can do for them.
Lack of understanding: Not all pharmacists, biologists, or industry professionals understand the potential and possibilities of quantum computing and how to reap noteworthy benefits from the technology.
Shortage of talent: While the industry produces quantum computing professionals, those with specialization in physics and chemistry are few. The talent shortage acts as a tough barrier to the growth of the quantum industry.
Conclusion
While the possibilities are limitless and the technology is still evolving, businesses will feel the real worth of quantum computing over the next five to ten years. Until then, efforts to create awareness of how the technology works and the kinds of problems it can solve must continue. To leverage the power of this technology, a growing quantum computing workforce is a must.
Our approach
Fractal's Quantum AI research lab is developing 'Day after tomorrow Research & Solutions.' For this, we have been investing in and executing deep research to develop application-focused frameworks for life sciences, financial services, and optimization solutions. While the research and application outcomes will benefit the life sciences, pharmaceutical, material sciences, and financial services industries, optimization arenas like supply chain, vehicle routing, and workforce scheduling also stand to gain.
Enable better decisions with Fractal
Fractal is one of the most prominent providers of Artificial Intelligence to Fortune 500® companies. Fractal's vision is to power every human decision in the enterprise and bring AI, engineering, and design to help the world's most admired companies. Fractal's businesses include Crux Intelligence (AI-driven business intelligence), Eugenie.ai (AI for sustainability), Asper.ai (AI for revenue growth management), and Senseforth.ai (conversational AI for sales and customer service). Fractal incubated Qure.ai, a leading player in healthcare AI for detecting tuberculosis and lung cancer. Fractal currently has 4000+ employees across 16 global locations, including the United States, UK, Ukraine, India, Singapore, and Australia. Fractal has been recognized as a 'Great Workplace' and one of 'India's Best Workplaces for Women' in the top 100 (large) category by The Great Place to Work® Institute; featured as a leader in the Customer Analytics Service Providers Wave™ 2021, Computer Vision Consultancies Wave™ 2020, and Specialized Insights Service Providers Wave™ 2020 by Forrester Research Inc.; named a leader in the Analytics & AI Services Specialists Peak Matrix 2022 by Everest Group; and recognized as an 'Honorable Vendor' in the 2022 Magic Quadrant™ for data & analytics by Gartner Inc. For more information, visit fractal.ai
Other Locations
Australia, Canada, China, India, Singapore, UK, Ukraine