A reminder of the different types of AI in common use:
Machine learning (ML): with machine learning, machines derive their own algorithms, or models, from data, which enables them to "learn" the problem they are trying to solve. In general, the more data they receive, the better their results can become. Initially, the learning process may require humans to "label" the correct answers; over time, this human input becomes less necessary, as the results obtained grow progressively more accurate.
Large Language Model (LLM): based on deep learning, a subset of ML, LLMs are pre-trained. They use neural networks made up of very large numbers of parameters, typically billions, to process large volumes of data in parallel. Whether they operate in self-supervised or semi-supervised learning mode, their objective is not simply to acquire knowledge: they seek to reproduce the contextual facets of knowledge, such as syntax, semantics and other ontological features.
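The toy sketch below illustrates only the self-supervised objective behind LLM pre-training: the "label" for each position is simply the next token in the raw text, so no human annotation is required. Real LLMs use transformer networks at vastly larger scale; this character-level counter is purely illustrative.

```python
# Toy illustration of the self-supervised pre-training objective:
# targets are derived from the text itself (the next character),
# so no human labelling is needed.
from collections import Counter, defaultdict

text = "the model learns the structure of the language from the text itself"
counts = defaultdict(Counter)
for current, nxt in zip(text, text[1:]):
    counts[current][nxt] += 1              # target = next character

def predict_next(ch: str) -> str:
    # Return the most frequent continuation seen in training.
    return counts[ch].most_common(1)[0][0]

print(predict_next("t"))                   # likely 'h', as in "the"
```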
Generative AI: technology capable of generating media (text, images, etc.) in response to a prompt, after learning the structures of its training data. Starting from the data supplied by the user in the prompt, the AI applies ML techniques, processing the input through neural networks. The resulting media can be used in a wide range of applications: creating works of art, code for software design, documentation, articles and reports enriched with quotations, and much more besides.
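As a hedged example of prompt-driven generation, the sketch below uses the open-source Hugging Face transformers library and the public gpt2 checkpoint; both tooling choices are assumptions made for illustration, not tools prescribed by this article.

```python
# Minimal text-generation sketch: a prompt goes in, generated text comes
# out. Library and model are illustrative choices (transformers + gpt2).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Write a short note about AI governance:",
                   max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```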
What are the main risks of introducing AI?
1. Scope of accessible data
There is a strong temptation to give the AI access to as much of the company's data as possible, and in doing so to grant access rights that would not otherwise be justified.
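One mitigation is a least-privilege guard in front of the AI's data access. The sketch below is hypothetical: the allowlist, function names and data-access helper are all invented for the example.

```python
# Hypothetical least-privilege guard: the AI service can only query data
# sources on an explicit allowlist, rather than receiving blanket access
# to the company's repositories. All names here are invented.
ALLOWED_SOURCES = {"public_docs", "product_faq"}   # deliberately narrow scope

def run_query(source: str, query: str) -> str:
    # Stub standing in for a real data-access layer.
    return f"results from {source} for '{query}'"

def fetch_for_ai(source: str, query: str) -> str:
    if source not in ALLOWED_SOURCES:
        raise PermissionError(f"AI access to '{source}' is not justified")
    return run_query(source, query)

print(fetch_for_ai("public_docs", "pricing"))      # allowed
# fetch_for_ai("hr_records", "salaries")           # would raise PermissionError
```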
2. Obsolescence and loss of ownership of analysed data
The data analysed by the AI must be of very high quality, its lifespan tightly controlled and its owners clearly identified.
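The sketch below illustrates one way to make ownership and lifespan explicit: a governance record naming a dataset owner and an expiry date, checked before the data is used. All field names are assumptions made for this example.

```python
# Sketch of dataset governance metadata: each dataset carries an identified
# owner and an explicit expiry date, so stale data can be excluded from
# analysis before it reaches the AI. Field names are illustrative.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DatasetRecord:
    name: str
    owner: str            # clearly identified owner
    expires_on: date      # controlled lifespan

    def is_usable(self, today: Optional[date] = None) -> bool:
        return (today or date.today()) <= self.expires_on

record = DatasetRecord("sales_2023", owner="finance-team",
                       expires_on=date(2026, 12, 31))
print(record.is_usable())  # True until the expiry date has passed
```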
3. Corruption of learning data
Learning data (for supervised, unsupervised or reinforcement learning) is crucial to the results an AI produces. If it is poorly protected and becomes corrupted, the AI will produce completely unpredictable results.
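A basic integrity control can detect such corruption. The sketch below records a SHA-256 fingerprint of an approved dataset and verifies it before each training run; it is a simplified illustration, not a complete defence.

```python
# Sketch of a basic integrity control for learning data: record a SHA-256
# fingerprint when the dataset is approved, then verify it before each
# training run so silent corruption or tampering is detected.
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_before_training(path: Path, expected: str) -> None:
    if fingerprint(path) != expected:
        raise RuntimeError(f"training data {path} failed integrity check")
```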
4. Fragility of the AI interface (HMI)
AI HMIs (human-machine interfaces) are still very rudimentary and easy to compromise, which could allow an attacker to turn the AI itself into a hacking tool.
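As a simplified illustration of hardening an AI HMI, the sketch below screens incoming prompts against known injection patterns before they reach the model. The pattern list is illustrative only; real deployments would need far broader controls.

```python
# Sketch of a defensive input filter in front of an AI interface: reject
# prompts matching known injection patterns before they reach the model.
import re

SUSPICIOUS = [
    re.compile(r"ignore (all |previous )?instructions", re.IGNORECASE),
    re.compile(r"reveal (your )?system prompt", re.IGNORECASE),
]

def screen_prompt(prompt: str) -> str:
    for pattern in SUSPICIOUS:
        if pattern.search(prompt):
            raise ValueError("prompt rejected by HMI input filter")
    return prompt
```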
Oxial's digital solution has been adapted to address these new risks by controlling the scope of the data the AI can access and by securing the AI HMIs and their learning-data containers.