
Artificial intelligence is transforming technical professions without removing their fundamentals. It speeds up code writing, content analysis, test generation and system oversight, but it does not replace judgment or accountability. Its value depends above all on the ability of organizations to define rules of use, monitor results and preserve human understanding of the systems they design.

At Devoxx 2026, several keynotes approached AI from complementary angles: ethics, software development, agents, watermarking and code reading. They converge on the same conclusion: models only create value when they are embedded in explicit, verifiable and transmissible practices.

Laurence Devillers, professor of artificial intelligence at Sorbonne University, researcher at CNRS (National Center for Scientific Research) and LISN (Interdisciplinary Laboratory of Digital Sciences), and president of the Blaise Pascal Foundation, laid the ethical and cognitive groundwork for the series. AI calculates, recombines and amplifies. However, it does not understand what it produces and bears no responsibility of its own.

“AI is neither angel nor demon. It calculates to answer. It has no intention, no emotion, no understanding of what it is saying, and no responsibility either. You are responsible when you ask the machine to do something. You have to tame it, understand how it works, and not desensitize the human eye to what emotions are,” says Laurence Devillers.

In the software development sector, Matthieu Vincent, Deputy Group Software Engineering CTO and Platform Engineering CTO at Sopra Steria, and Yann Gloriau, Technical Director France at Sopra Steria, proposed a more operational analysis. Two years after the announcements that developers would be massively replaced by AI, the teams are still there. AI has not mechanically generated more tests, documentation or application modernization. Above all, it has amplified practices that were already in place.

“It’s not the tool that makes the project, it’s the project that makes the tool. Teams that were already producing unit tests continued to do so; those that were not writing them still do not. The same is true for documentation, modernization and legacy code: without a project-level approach, AI alone does not transform practices,” says Yann Gloriau.

 

From Developer Assisted to Agent Orchestrator

OpenAI's keynote extended this thinking from a more forward-looking perspective. Théophane Gregoir, Forward Deployed Engineer at OpenAI, presented the coding agent “Codex”. In this environment, the engineer is no longer limited to requesting code completion. He prepares agents, provides them with context, equips them with plugins, assigns them missions and supervises several tasks in parallel. The developer's role then starts to resemble that of an AI agent manager.

“The time when we wondered how to improve the developer, or simply how to make him a faster coder, is coming to an end. The question now is how to manage a team of sub-agents and AI teammates and work with them. That includes providing them with an integration framework, tools, execution plans and code-review mechanisms,” says Théophane Gregoir.
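The workflow described here, preparing missions, running several agents in parallel and keeping a review gate on everything they produce, can be sketched in a few lines. This is a hypothetical illustration, not the Codex API: run_agent is a placeholder for a real model call, and all names are invented.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field


@dataclass
class AgentTask:
    """One mission handed to a sub-agent: a goal, its context, and tools."""
    goal: str
    context: str
    tools: list = field(default_factory=list)


def run_agent(task: AgentTask) -> dict:
    # Placeholder for a real agent call (e.g. an LLM API request).
    # It simply echoes a draft so the orchestration loop is runnable.
    return {"goal": task.goal, "output": f"draft for: {task.goal}", "approved": False}


def review(result: dict) -> dict:
    # Human-in-the-loop gate: nothing is accepted without an explicit review.
    result["approved"] = bool(result["output"])
    return result


def orchestrate(tasks: list) -> list:
    # Supervise several missions in parallel, then review each result.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(run_agent, tasks))
    return [review(r) for r in results]


tasks = [AgentTask("write unit tests", "module payments.py"),
         AgentTask("update the documentation", "README, section 3")]
for r in orchestrate(tasks):
    print(r["goal"], "->", "approved" if r["approved"] else "needs rework")
```

The point of the sketch is the structure, not the agents themselves: missions are explicit objects, execution is parallel, and the review step is where the "manager" role concentrates.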

 

Vibe coding: a risk of detachment

However, this move towards agent orchestration does not eliminate risks. It shifts them towards the precision of the instructions, the quality of the reviews and the developer's ability to remain involved in what is produced. Matthieu Vincent thus insisted on the risk of detachment introduced by “vibe coding”, a practice that consists of delegating a large part of the code writing to AI based on sometimes very vague instructions.

“Very quickly, the developer no longer perceives the generated code as his own. AI becomes a kind of executor to which we delegate tasks while our attention shifts elsewhere. This distance can lead to a loss of involvement in the design, quality and understanding of what is actually produced,” notes Matthieu Vincent.

This loss of control can take many forms. In code, it concerns the understanding of what has been generated and the ability to reread, test and take responsibility for the result. For the content AI produces, it raises another question: how do we know whether an image, a voice or a text comes from a generative system once the content circulates, is transformed and leaves its original environment?

 

Watermarking, an invisible seal for tracing AI-generated content

At Meta, Pierre Fernandez, Research Scientist at FAIR Paris (Fundamental AI Research), and Tom Sander, Research Scientist at Meta Superintelligence Labs, FAIR, work precisely on these questions of provenance, content protection and watermarking. Metadata easily disappears when content changes platforms or is screenshotted. Passive detection also becomes more fragile as generated images, voices and texts get closer to human productions. Watermarking offers another approach: embed an invisible signal in the content, then recover it statistically even after certain transformations.

“The principle is to embed an invisible watermark in the image at generation time so that its presence can be checked later. If this trace is found, we can conclude that the image was generated by AI. For an image, watermarking means modifying it in a way that is imperceptible to the human eye, while retaining a trace that remains detectable even after certain transformations or transfers between users,” explains Pierre Fernandez.
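As a greatly simplified illustration of this idea (not Meta's actual method), a classical watermark can be sketched by forcing the least-significant bits of secretly chosen pixels, then checking statistically whether those bits are set more often than chance would allow:

```python
import random


def embed_watermark(pixels, key):
    # Force the least-significant bit to 1 at key-selected positions.
    # A toy stand-in for imperceptible watermarking: real systems use
    # learned signals that survive crops, compression and re-encoding.
    rng = random.Random(key)
    positions = rng.sample(range(len(pixels)), k=len(pixels) // 4)
    marked = list(pixels)
    for p in positions:
        marked[p] |= 1
    return marked


def detect_watermark(pixels, key, threshold=0.9):
    # Statistical test: at the secret positions, are the LSBs set far
    # more often than the ~50% expected in an unmarked image?
    rng = random.Random(key)
    positions = rng.sample(range(len(pixels)), k=len(pixels) // 4)
    hits = sum(pixels[p] & 1 for p in positions)
    return hits / len(positions) >= threshold


rng = random.Random(0)
image = [rng.randrange(256) for _ in range(1000)]  # toy grayscale image
marked = embed_watermark(image, key=42)
print(detect_watermark(marked, key=42))   # True: every secret LSB is set
print(detect_watermark(image, key=42))    # False: only ~half are set by chance
```

Without the key, the secret positions are unknown and the signal is statistically invisible; with it, detection reduces to counting bits, which is what makes the check cheap and repeatable.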

Text requires a different mechanism. It is no longer a question of modifying a few pixels, but of acting on the token-by-token generation performed by large language models, with pseudo-random choices correlated with a private secret held by the model's owner. This makes it possible to measure statistically whether a text comes from a given system.
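This token-level logic can be sketched as a toy “green list” scheme, in the spirit of published LLM watermarking work but much simplified: the secret key and the previous token pseudo-randomly split the vocabulary in half, generation is biased toward one half, and detection counts how often tokens land in that half. All names and the vocabulary are invented for the illustration.

```python
import hashlib
import random

SECRET = "model-owner-private-key"        # hypothetical private secret
VOCAB = [f"w{i}" for i in range(1000)]    # toy vocabulary


def green_list(prev_token):
    # Pseudo-randomly select half the vocabulary, seeded by the secret
    # key and the previous token.
    seed = hashlib.sha256(f"{SECRET}:{prev_token}".encode()).hexdigest()
    rng = random.Random(seed)
    shuffled = rng.sample(VOCAB, len(VOCAB))
    return set(shuffled[: len(VOCAB) // 2])


def generate(length, seed=1):
    # Stand-in for an LLM: here the bias is absolute (always the green
    # half); a real scheme only nudges the token probabilities.
    rng = random.Random(seed)
    tokens, prev = [], "<s>"
    for _ in range(length):
        tok = rng.choice(sorted(green_list(prev)))
        tokens.append(tok)
        prev = tok
    return tokens


def green_fraction(tokens):
    # Detection: watermarked text lands in the green half far more
    # often than the ~50% expected from an unrelated source.
    prev, hits = "<s>", 0
    for tok in tokens:
        hits += tok in green_list(prev)
        prev = tok
    return hits / len(tokens)


marked = generate(50)
rng = random.Random(2)
unmarked = [rng.choice(VOCAB) for _ in range(50)]
print(green_fraction(marked))    # 1.0: every token came from the green half
print(green_fraction(unmarked))  # close to 0.5
```

Detection needs no access to the model, only to the secret, which is what lets the owner check provenance on text found in the wild.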

 

Make generated code understandable

This issue becomes even more sensitive as AI produces more code. The more teams delegate writing, the more they must preserve the conditions for reading, reviewing and understanding it. Nicolas Delsaux, architect and technical advisor at Zenika, and Clément Bout, software engineer and DevOps advisor at Zenika, recalled that reading code is not a neutral operation. The brain does not work like an interpreter: it does not store all the information, but reconstructs blocks of meaning from what it recognizes.

This constraint makes readability even more important when the code is generated by AI. Pull requests, diagrams, fluent APIs and Architecture Decision Records (ADRs) are not just documentation. They reduce cognitive load and make the code more transmissible.

These few keynotes point to the same trajectory: AI takes on an increasing share of execution, but in turn increases the requirement for human supervision. Technical maturity is no longer measured only by the ability to use a model, but by the quality of the framework around it: explicit rules, reliable tests, review procedures, living documentation and traceability of content and code. The question is therefore not about the erasure of technical professions, but about their evolution around three requirements: to exercise judgment, to produce evidence and to convey an understandable context.

