Easier said than done


Although generative AI is easy to implement, integrating it into existing processes while maintaining company standards in terms of quality or security remains a project in itself. (Photo: Growtika / Unsplash)

With ChatGPT’s breakthrough, most companies have established an AI strategy, but implementing it still raises many questions, from data preparation and skills to mastering security issues.

Of course, generative AI has a magical side: you plug it in and everything seems to work right away, which in computing is unusual to say the least. But looks are deceiving, because turning the technology into a vector of efficiency takes more than a wow effect, as a morning of discussions recently organized by TNP Consultants made clear. “Generative artificial intelligence is above all a marketing coup from the United States,” says Benoît Ranini, president of this firm, founded in 2007 and employing 750 people. “The technology is available, ready and cheap. But we must now measure what we really gain on the value chains,” and address certain issues that come with implementing AI, such as data quality.

Take Carrefour, which uses generative AI to overcome one of the difficulties of food e-commerce sites: the time needed to build a shopping list. A chatbot that draws on a recipe database helps internet users fill their basket via API calls sent to the e-commerce site, explains Sébastien Rozanes, the group’s Chief Data Officer. The application, which offers customization features, was simple to build, with just six weeks between project start and production. “The main problem is not building the engine, but the data preparation work,” he says. Benoît Ranini also cites that topic as one of the conditions for a successful AI strategy, along with understanding regulatory issues, measuring the value created and the tool strategy.
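The recipe-to-basket pattern described above can be sketched in a few lines. This is a minimal illustration with entirely hypothetical names (`RECIPES`, `add_to_basket`, `build_basket`); the article does not describe Carrefour’s actual API, so the e-commerce endpoint is stubbed locally:

```python
# Minimal sketch of a recipe-to-basket helper: the chatbot looks up a
# recipe's ingredients and turns each one into a basket-API call.
# All names here are illustrative; the real e-commerce API is stubbed.

RECIPES = {
    "ratatouille": ["eggplant", "zucchini", "tomato", "onion", "olive oil"],
}

basket = []  # stands in for the state held by the e-commerce site

def add_to_basket(product: str) -> dict:
    """Stub for the e-commerce site's basket endpoint (assumed shape)."""
    basket.append(product)
    return {"status": "ok", "product": product}

def build_basket(recipe_name: str) -> list:
    """Map a recipe to one API call per ingredient, as the chatbot would."""
    for item in RECIPES.get(recipe_name.lower(), []):
        add_to_basket(item)
    return basket

build_basket("Ratatouille")
print(basket)  # the five ratatouille ingredients, queued for the shopper
```

The point of the example is the division of labor: the hard part is not this orchestration code but curating the recipe and product data it depends on, which is exactly the data-preparation work the article highlights.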

SNCF turns to SLMs

For many specialists, another key priority is training managers and business teams in the technology. By the end of 2021, Carrefour’s entire top 150 had received two days of data training. “It was what helped me the most when I arrived at Carrefour,” says Sébastien Rozanes, even though generative AI was not yet widely available at the time. At L’Oréal, the internal university for technology and data also runs a program dedicated to the group’s managers, but it aims to train all employees in generative AI, which the cosmetics group is trying to tame with a chatbot called L’OréalGPT, used by 15,000 people daily, according to Stéphane Lannuzel, the company’s ‘Beauty Tech’ director. “As soon as ChatGPT was released, we sent employees a list of dos and don’ts for this technology. But it was especially the don’ts that interested us,” he says.

Because the technology is a cause for concern, especially in terms of data exfiltration, a number of large French companies have deployed chatbots on private instances. “In areas of expertise, such as TGV maintenance, we are increasingly studying SLMs, Small Language Models, for cost reasons but also for sovereignty reasons. Our tests indicate performance similar to LLMs on specialized subjects,” says Henri Pidault, Group CIO of SNCF. By limiting the number of parameters, SLMs simplify the training phases, reducing both the cost and the ecological impact of generative AI. For Laurent Daudet, CEO and co-founder of LightOn, a French LLM vendor, the security challenge of public generative AI goes beyond data alone: “Since we have both input and output data, the entire know-how can be threatened,” he assesses.
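The cost argument for SLMs follows directly from parameter count: weight memory, and hence hardware footprint, scales roughly linearly with it. A back-of-the-envelope illustration, using generic model sizes (70B and 3B are illustrative figures, not SNCF’s models):

```python
# Weight memory scales linearly with parameter count, one reason SLMs
# are cheaper to train and serve than full-size LLMs.

def weight_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Approximate weight storage in GB at 16-bit (2-byte) precision."""
    return params_billions * 1e9 * bytes_per_param / 1e9

llm = weight_memory_gb(70)  # a 70B-parameter LLM: ~140 GB of weights
slm = weight_memory_gb(3)   # a 3B-parameter SLM: ~6 GB of weights
print(f"LLM ~{llm:.0f} GB vs SLM ~{slm:.0f} GB")
```

A roughly 20x gap in weight storage translates into proportionally fewer GPUs and less energy for training and inference, which is the economic and ecological point made above.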

Measure the risks without freezing any project

However, integrating artificial intelligence into companies’ daily operations goes beyond the single issue of data exfiltration, as Guillaume Poupard, deputy general manager of Docaposte, illustrates. The former head of ANSSI cites the example of Pronote, software that manages relations between teachers, parents and students at about 10,000 middle and high schools, and which joined the Docaposte fold in 2020: “We started an experiment aimed at detecting students at risk of dropping out, and thanks to the technology we can identify them on average three months before the first reports. But that is only the easy part of the project. The question today is who to warn, and how to use this information to really reduce school dropout rates.” To monitor its use of the technology, Docaposte has set up an ethics committee headed by Professor Jean-Gabriel Ganascia. “It is important to understand and measure the risks,” says Guillaume Poupard. “But this must not become a factor that blocks all development. Otherwise, Europe will once again find itself lagging behind the US.”
