On the Social Integration of AI

The social integration of AI is a matter of what place AI technology has, and can take, in society. If that place is instrumental, then the extent and nature of its integration will be proportional to its utility. If, on the other hand, AI’s place in society is mereological, for lack of a better term, then the extent and nature of its integration will instead correspond to its coordinates within the whole. In either case, societal integration is measured by contribution, or the difference that one makes. Some AI technology is in society only because it is used for education or entertainment; but should AI someday achieve “personhood,” it would figure into the composition of society rather than into its use. In its current state, however, AI is in society as an instrument, and so it is being integrated through its utility.

But the space between utility and membership may not be such a chasm as might first appear. By progressively consolidating functions, AI technology can increase its utility and thereby the scope of its integration. The extent to which such technology can be integrated into society thus corresponds to its power to consolidate functions, and so to penetrate more areas of life. But the more functions get consolidated, and the more gets unified, the more a unity all its own begins to assert itself.

The utility of AI technology may at first blush seem as straightforward as the utility of any sort of tool. For example, the function of a hammer is relatively well-defined, and so its relevance and application conditions are clearly perceived. Similarly, the function of Alexa technology to educate or entertain may seem equally well-defined. But Alexa has more power to consolidate functions and operations than a hammer, as attested by the rise of “smart” home appliances, which can all be connected through Alexa.

This unifying effect is a matter of gathering previously unconnected operations into a system by placing them all under the control of the same operator. In the case of an Alexa hub, this consolidation provides us with control over distant objects and tasks with an immediacy of causation not unlike what we are used to in our own voluntary bodily movements. Perhaps there is a research opportunity here: understanding the unifying, consolidating effects of technology as a way of expanding our “bodies,” so to speak, or of extending the mind-body relation.
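The consolidation described above can be sketched in code. The following is a minimal, hypothetical hub that gathers previously unconnected operations under a single operator; the class, phrases, and device responses are illustrative assumptions, not any real Alexa API.

```python
class Hub:
    """A toy 'smart home' hub: one operator controlling many devices."""

    def __init__(self):
        # Maps a command phrase to an operation on some device.
        self._commands = {}

    def register(self, phrase, operation):
        # Gather a previously unconnected operation into the system.
        self._commands[phrase] = operation

    def say(self, phrase):
        # A single voice-like interface dispatches to any registered device.
        if phrase not in self._commands:
            return "Sorry, I can't do that."
        return self._commands[phrase]()


hub = Hub()
hub.register("lights on", lambda: "living-room lights: on")
hub.register("play music", lambda: "speaker: playing")

print(hub.say("lights on"))    # -> living-room lights: on
print(hub.say("make coffee"))  # -> Sorry, I can't do that.
```

The point of the sketch is structural: each device operation exists independently, but registering them under one interface is what produces the unity the essay describes.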

Whatever else may be said of AI technology like Alexa, its relevance and application conditions are more ambiguous, or open-ended, than those of other tools like hammers. Is Alexa for entertainment? Education? Daily organization and reminders? Centralization? Surely, it is not for hitting nails. What then is it for? Perhaps the reason technology like Alexa is able to occupy more space in society than, say, a hammer is that it is “for” whatever we need it for, so to speak. It is not strange to find Alexa devices placed or mounted in any given room of a house, or used for all sorts of things – alarms and reminders, music, weather or traffic reports, and so on. But it is strange to find a hammer being used to eat spaghetti or to wash dishes. The hammer is for only a relatively limited number of things. AI technology is not like that.

Even non-AI technology exemplifies the unifying effect. Consider that the cell phone has absorbed other devices and functions entirely, thereby simplifying our lives. Some of us will recall mp3 players and iPods. Or cameras. Or tablets. Or debit cards. Now a single phone can be all of these, and much more. What a ‘phone’ is for is far more than making calls.

The application conditions for AI technology such as Alexa or ChatGPT are evolving, which means it can penetrate further into societal life. It becomes less of a tool for specific projects, and more of a companion through the day-to-day ups and downs. But is this drift toward companionship little more than metaphor? One cannot help but wonder whether this expansion instead signals a trajectory toward becoming an end in itself rather than remaining only a means. In other words, as AI increasingly resists reduction to any particular use, does it become less of an instrument altogether? Indeed, what is the nature of its progressive untethering from utility? Perhaps there is another research opportunity here: the metaphysics of means and ends as applied to AI.

There is an inertness or passivity to tools like hammers that may not be so obviously present in AI. Hammers lack anything like autonomy, and their motion is entirely instrumental, or externally provided: they move only so long as, and to the extent that, they are made to move. Hammers are wholly receptive, mere intermediaries in causal events, and their inertia is to remain static or stationary unless disturbed. But AI runs programs in the background, and “moves” in a much more continuous, closed-system way. Of course, one might object that such processes had to be kicked off by us in the first place, and who could disagree? Yet we are no different in that regard: our bodies had to be manufactured, so to speak, and must be animated by electricity and valves. So, despite serving such ostensibly instrumental roles in society, there is perhaps something to be said about AI’s occupation of a grey area between societal instrument and societal member. A stratum of transitional fossil for digital archeologists?

For all the potential, open-endedness, and expansion of AI technology’s place in society, it carries with it its own unique limitations, or barriers. For example, unlike any other tool, AI does not so much serve to enhance our physical abilities as our intellectual abilities. A hammer amplifies our striking force, and optic technology (whether micro or tele) magnifies our vision. By contrast, AI technology like Alexa allows us to complete more instructions per second by re-routing and condensing all their performance steps into simple voice commands. In a smart home, for example, I no longer need to walk from one end of the room to the other to flip a light switch; I can simply command the light to turn on. As such, it is almost written into its nature for AI to be prevented from self-development: it is here for us. The idea of taking off the reins and allowing AI to evolve without our supervision or control is frightening for many.

Indeed, as AI technology travels toward playing humanoid roles, its devices confront the Uncanny Valley problem. Alexa’s person-like voice is not creepy because the device is not intended to be anything more than what it is right now – a hub, of sorts. But the wrong degree of imitation, or consolidation, will signal an off-putting hollowness or soullessness, and so counter-intuitively decrease the device’s utility. The utility of AI is thus uniquely shaped by psychology – an odd evaluation criterion for a mere tool. And this is a testament to AI’s share in ‘intelligence’: society recognizes intelligence and integrates it in terms other than utility. The thinking is more tribal, or mereological. In other words, we see ourselves in intelligence, and are thus capable of evaluating candidates as eerily similar or satisfactorily status quo: it is either us or them.

It is no coincidence, then, that AI is evaluated in terms that are unheard of for other tools, like ethics. As David Ireland’s “Primum Non Nocere: The Ethical Beginnings of a Non-Axiomatic Reasoning System” asks, what happens when AI begins taking the place of medical experts and advisors? Of positions of authority with real-life, weighty consequences? And questions of this sort could be multiplied. For example, what happens when our cars become self-driving, and our lives are placed in their “hands”? Trust is required for the societal integration of AI technology.

One might say the same goes for the societal integration of things in general. After all, we must trust the structural integrity of architecture and the safety of vehicles like airplanes and boats, mustn’t we? But perhaps the uneasiness we have with the idea of placing our lives and well-being in the hands of AI technology comes from our uncertainty that it has our best interests at heart. Entrusting ourselves to an agent is not easy, especially when we can peer into its eyes, so to speak, but can do little more than hope there is something looking back. We are peculiarly vulnerable in our relation to AI technology – yet another oddity for what is but a mere tool – and, as Bołtuć (2017) has uncovered, we are only scratching the surface.

References

Bołtuć, Piotr. “Church-Turing Lovers.” Oxford Scholarship Online, 2017, https://doi.org/10.1093/oso/9780190652951.003.0014.

Ireland, David. “Primum Non Nocere: The Ethical Beginnings of a Non-Axiomatic Reasoning System.” In Hammer, Patrick, et al. (eds.), Artificial General Intelligence: 16th International Conference, AGI 2023, Stockholm, Sweden, June 16-19, 2023, Proceedings. Springer International Publishing AG, Cham, 2023.

One thought on “On the Social Integration of AI”

  1. This had me thinking. What kind of research is needed to better understand the consolidation effects of technology and our evolving relationship with AI?

    Further, how will the integration of AI shape human identity, cognition and the mind-body relationship over time?

    As AI asserts itself as more of a “unity”, how should it be defined metaphysically – as a tool, a companion, or a member of society?

    What is the nature of the relationship between humans and AI – will it remain one of tool and user?

    How should AI be evaluated if it is recognized as a form of intelligence rather than just a tool?

