Article
5 min read
Mario Rugeles

A few months after artificial intelligence (AI) tools – among them ChatGPT as surely the most popular example – started taking the world by storm, reality is beginning to sink in, and the apocalyptic prophecies about software developers being forced to change careers now call for a revision.

 

We are not yet seeing the long-term consequences of large language models (LLMs) like ChatGPT in society, but mass media surely has already made a good share of profit click-baiting people with articles about how their professions will soon be obsolete, software developers included. And it’s quite easy to feel worried about it; ask ChatGPT to write the code you need, and its response will likely make you question your career.

 

What’s not easy – and every software developer knows this very well – is finding the limits of any tool, AI included. That takes some time to figure out, and a few months after ChatGPT became hugely popular, it seems that in practice the role of AI will be more about assisting than replacing.

 

Let’s take this example: if you ask ChatGPT to code a neural network to identify events in a video stream, like a robbery, you will get some code that very likely suits your needs. But you’ll have to be very specific about how you want that implemented. In other words: you need to have a deep understanding of what a neural network is and how to develop in TensorFlow or PyTorch. You need to know Python, concepts like long short-term memory (LSTM) and how to build a classifier over a stream of data; otherwise, you won’t be able to understand and verify what the model is giving you.
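
To illustrate the kind of background knowledge involved, here is a minimal, hypothetical sketch of such a classifier in PyTorch. The feature dimension, clip length and class labels are assumptions made purely for the example, and the per-frame features are presumed to come from a separate vision model.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: an LSTM classifier over a stream of per-frame
# feature vectors (e.g. embeddings extracted from video frames by a CNN).
class EventClassifier(nn.Module):
    def __init__(self, feature_dim=512, hidden_dim=128, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, frames):            # frames: (batch, time, feature_dim)
        _, (h_n, _) = self.lstm(frames)   # h_n: final hidden state per clip
        return self.head(h_n[-1])         # logits, e.g. "robbery" vs "normal"

model = EventClassifier()
dummy_clips = torch.randn(4, 30, 512)     # 4 clips, 30 frames, 512-dim features
print(model(dummy_clips).shape)           # torch.Size([4, 2])
```

Writing, training and validating even this toy version already assumes familiarity with PyTorch, tensors and sequence models: exactly the expertise the generated code does not replace.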

 

In practice, AI is not of much use without an expert by its side. An AI model may be able to code a neural network, but you need a data scientist to make the best use of it, and the same applies to every field in software development.

 

Walk before running

 

The real challenge is to define how AI can assist your teams with software delivery. This task is far from trivial, but it is unavoidable: the competitive edge will likely come down to who finds the best and safest way to harness the benefits of AI systems.

 

Even more, some of the challenges are not even technical ones. Privacy and intellectual property rights (IPR) issues, for example, are among the major blockers to adopting AI systems. You don’t have control over what happens to the data you send to ChatGPT: the Samsung fiasco, where employees accidentally leaked sensitive information through the tool, is a painful reminder of the risks of using AI systems. So, in order to truly support your teams, you need to implement rules. That includes making sure you’re not infringing data protection regulations like GDPR and CCPA, a responsibility that goes well beyond the typical technical considerations.

 

The fast adoption of AI can quickly expose how prepared (or unprepared) organisations are to deal with sensitive issues like privacy and security. Nevertheless, there are options available to minimise those risks. For example, platforms like Hugging Face allow companies to build on publicly available models and run them in-house, on their own infrastructure.
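
As a rough illustration of that approach, the sketch below uses the Hugging Face transformers library to run an openly available model entirely on local hardware; the model name is only a lightweight placeholder, and a real deployment would choose a checkpoint whose licence and capabilities fit the use case.

```python
from transformers import pipeline

# Download an openly available checkpoint once, then run it on your own
# hardware so prompts, code and data never leave your infrastructure.
# "distilgpt2" is just a small placeholder model for this example.
generator = pipeline("text-generation", model="distilgpt2")

completion = generator("def is_valid_email(address):", max_new_tokens=40)
print(completion[0]["generated_text"])
```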

 

Relying completely on the outcomes of AI also entails significant risks: as intelligent as all these tools may seem, they are still narrow AI. They do one task, and one task only. Sure, ChatGPT delivers amazing responses, and yes, we are talking about a truly complex architecture, but at the end of the day, it’s a machine learning (ML) model that just predicts the next most probable token for a given text.

 

That’s a narrow AI model. ChatGPT doesn’t know anything, let alone everything; it only does one thing: predict the next probable word. Amazingly, that’s all it does! That’s part of the reason why models ‘hallucinate’: their responses may deviate from their training data, yet still read as ‘coherent’ sentences even when they are incorrect. Such AI models do not deliver the best response possible, but the best guess available to them. Above a certain size, these LLMs may manifest some emergent behaviours, but they fundamentally lack a world model that would make the leap to a general AI.
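
To make ‘predicting the next token’ concrete, here is a small illustrative sketch that inspects the probabilities a causal language model assigns to possible next tokens. GPT-2 is used only because it is small and publicly available; the same principle applies to far larger models.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 stands in here for much larger LLMs; the mechanism is the same.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # (batch, sequence, vocabulary)

# The model's entire "answer" is a probability distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {float(prob):.3f}")
```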

 

Also, narrow AI systems will treat as correct whatever we tell them is ‘correct’, because their quality depends on the data used to train them. That makes these systems prone to producing biased results, so knowing how to assess the quality of their outputs is imperative if you want to catch the biases that inevitably emerge when building an ML model.

 

Measuring the real impact

 

You also need to measure how much an AI system really helps your teams improve their productivity. To that end, you need to conduct well-designed experiments that measure the real impact of using AI on delivering your product to production.

 

After adopting AI in your teams with well-defined rules, you may want to ask questions like:

 

  • How has the use of AI improved a team’s sprint velocity?
  • Has the average number of detected bugs changed significantly?
  • What is the impact on security vulnerabilities and security bugs?
  • If the code has not improved significantly, is it due to limitations in the AI system (bias) or improper usage (prompt engineering)?
  • Does AI help mitigate changes in team composition, such as losing a team member or onboarding new ones?
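
As a rough sketch of how such questions could be answered with data rather than intuition, the example below compares bug counts from sprints before and after adopting an AI assistant; the numbers are invented purely for illustration.

```python
from scipy import stats

# Invented numbers, purely for illustration: bugs detected per sprint before
# and after the team adopted an AI assistant under agreed rules.
bugs_before_ai = [14, 11, 16, 13, 12, 15]
bugs_with_ai = [10, 12, 9, 11, 13, 8]

# A simple two-sample t-test indicates whether the difference is likely real
# or just noise; with samples this small, the result is rarely decisive.
t_stat, p_value = stats.ttest_ind(bugs_before_ai, bugs_with_ai)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```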

 

Such questions are instrumental when adopting AI systems. Any output created using these tools must meet the quality standards that software products meet today. It all comes down to balancing opportunities and risks, and both are substantial right now. Many players are entering the field looking to minimise the risks while maintaining or increasing the opportunities. Taking intentional steps to strike that balance in our delivery process requires a good dose of rationality: asking the right questions and measuring the real impact of AI, so that it truly meets our needs and improves everyone’s work.

 

With all the possibilities ahead, it’s okay to imagine best- and worst-case scenarios – speculation is useful up to a point. But when you are responsible for making sure your code works flawlessly in production, speculation will bring only noise, and you’ll probably miss the opportunity to make the most of what AI has to offer. For that, you need to combine possibility thinking with insight-based decisions.

 
