The Social and Cognitive Impact of AI-Powered Services

Scaling AI Joel Belafa

Our society is increasingly reshaped by ever more sophisticated technologies, and AI plays a major role in this process. In this series, we've aimed to raise every stakeholder's awareness of the necessity of assessing the risks of AI.

But it doesn't stop with the service release and general availability. The preceding topics remain intuitively close to other robotics or software projects and cover the basics of the ramp-up phase of any industry.

It is, however, probably more challenging to talk about the risks of AI adoption within a microcosm, given how diverse the forms it takes can be. AI is expected to touch industries and services that even the internet never influenced. There's an evident uncertainty caused by the fact that we have yet to imagine the limits of AI.

Still, some phenomena already observed point clearly to unwanted sociological effects and unnecessary (if not harmful) side effects caused by an organization's adoption of an AI-driven service, or by a significant change in the nature of a service.


Behavioral and Cognitive Impacts

I believe we need to start with a long but necessary example.

Let's assume you live in the countryside and you are interested in politics. You have many sources of information, like TV and newspapers, but your online activity also attests to this interest. You start getting content recommendations on regional politics, and you progressively change your habits in favor of online content. One day, there's a sudden, large political disinformation campaign in your region. You click on one of its articles, and a recommender system later associates that website with other, similar sources of fake news. Your navigation gets progressively polluted, and no other source of information can counteract it. You may not even be aware of this customization, yet within a few months your political knowledge is entirely different.

Such a scenario could apply to any system influencing our decision-making process. 
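The dynamic behind this scenario can be sketched in a few lines. The following is a deliberately naive, hypothetical simulation (topic names, the boost factor, and the round count are all invented for illustration): a recommender that reinforces whatever gets clicked will let one click on misleading content snowball until that content dominates the feed.

```python
import random

random.seed(42)

# Hypothetical topics; all names and numbers are illustrative only.
topics = ["regional politics", "sports", "culture", "disinformation"]
weights = {t: 1.0 for t in topics}

def recommend():
    """Sample a topic in proportion to its current weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for topic, w in weights.items():
        r -= w
        if r <= 0:
            return topic
    return topics[-1]

def simulate(clicks_on, rounds=200, boost=1.5):
    """Each click multiplies the clicked topic's weight: a feedback loop."""
    for _ in range(rounds):
        shown = recommend()
        if shown == clicks_on:       # the user clicks the misleading content...
            weights[shown] *= boost  # ...and the system doubles down on it
    total = sum(weights.values())
    return {t: round(w / total, 3) for t, w in weights.items()}

shares = simulate("disinformation")
```

Even though all four topics start with equal weight, the self-reinforcing boost means the clicked topic quickly crowds out the rest of the feed; real recommenders are far more sophisticated, but the feedback structure is the same.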

A mindful integration of AI into decision-making generally takes the form of helpers or subtle suggestions embedded in an application that an operator or their team already knows well. They would progressively follow the recommendations and refine them with their own choices, and the system would improve by analyzing its impact over time.

You would typically observe that productivity increases dramatically, and the virtuous reduction of what people would call thankless tasks raises both the user's comfort and the value delivered to the consumer.

The potential danger comes with this massive change of behavior. In some cases, we can see:

  • The development of dependent (if not addictive) behavior toward these services
  • The inability to escape the influence of AI
  • The decline of some of a consumer's cognitive functions

While some might eventually welcome dependent behavior in a particular business case, we know that some people could suffer severe side effects. We also need to mitigate the risks raised by the eventual unavailability of the service or of the AI support.
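One common mitigation for that last risk is to make sure the workflow degrades gracefully when the model cannot answer. The sketch below is hypothetical (the function names, the fallback score, and the error type are all invented for illustration): the AI call is wrapped so that, if the model service is down, the process falls back to the simple rule the team used before the AI existed, instead of blocking entirely.

```python
# Conservative default used whenever the model cannot answer;
# the value 0.5 is an arbitrary, illustrative choice.
DEFAULT_SCORE = 0.5

def ai_score(transaction):
    """Stand-in for a remote model call; here it always fails,
    simulating an unreachable model service."""
    raise ConnectionError("model service unreachable")

def score_with_fallback(transaction, model=ai_score):
    """Prefer the model, but never let its unavailability block the process."""
    try:
        return model(transaction), "model"
    except ConnectionError:
        # Fall back to the pre-AI rule so the service keeps working.
        return DEFAULT_SCORE, "fallback"

score, source = score_with_fallback({"amount": 120.0})
```

The point is less the code than the design choice: the non-AI path must stay maintained and tested, so dependence on the model never becomes dependence without an exit.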

Concerns about the impact on our cognitive systems must be considered thoroughly, as they may pose severe public health issues no matter the scale at which the service is deployed.

Furthermore, we should consider the indirect scenarios of such an adoption. Adopting a technology sometimes fosters the wrong reflexes toward other technologies. A young child accustomed to touchscreens will immediately expect similar behavior from every type of screen. Someone accustomed to safe transactions on the internet might not be wary of scams on other websites.

Obsolescence and Normalization 

Along with the potential change of behavior, a mechanical impact on the chain of events that justified the use of AI can be observed after persistent use or deployment of the service. If the service is meant to prevent a type of event from occurring and succeeds in doing so, the data generated from that point on will be entirely different. Furthermore, some indirect adaptation may be observed: in line with a change of behavior, but not coming directly from the user of the service.

Imagine a fraud detection system so good at preventing one type of fraud that fraudsters abandon the method entirely in favor of another. In another scenario, a recommender system could cost you many customers in a segment by failing to surface new or different products for that segment, simply because it has a huge influence on which products consumers see. For industries like fashion, the paradigm can change quickly, and at some point historical purchases can become the worst possible indicator.
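In the machine learning literature this shift is usually called concept or data drift, and a standard way to catch it is to compare the distribution the model was trained on with the one it now sees. Below is a minimal sketch of one common metric, the Population Stability Index (PSI); the bin fractions are invented for illustration, and the 0.2 alert threshold is a widespread convention rather than a rule.

```python
import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """Population Stability Index: sum of (a - e) * ln(a / e) over bins.
    Zero means identical distributions; higher values mean more drift."""
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e, a = max(e, eps), max(a, eps)  # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

# Illustrative fractions of fraud attempts per pattern, before and after
# the detection system was deployed: fraudsters shifted to the last pattern.
training_dist = [0.50, 0.30, 0.20]  # the fraud mix the model learned
current_dist  = [0.10, 0.30, 0.60]  # the mix observed after deployment

drift = psi(training_dist, current_dist)
needs_retraining = drift > 0.2  # common convention: PSI above 0.2 is an alert
```

Monitoring a signal like this after release is exactly what catches the scenario above: the model's own success changes the data, and the drift metric is what tells you the training set no longer describes the world.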

We can also name services that provide a strategic advantage at a certain moment but turn back into neutral, defensive services once the competition adopts something similar. The general risks around this mechanism are not trivial, and they must be anticipated when you calculate your project's return on investment.

Up, Up, and... Everyday AI

AI is actually one of the most fertile grounds for demonstrating human creativity. Each day brings an innovative application of AI to our world, and our chances of being exhaustive will only decrease with time. Still, we hope that the seed of responsibility we planted will grow along with your business and technical knowledge.
