Dr Paulius Jurčys: A New Social Contract. Reclaiming Ownership of Our Data

Published: 10 June 2025

A robot café. Photo: the author's personal archive.

In the age of rapidly advancing artificial intelligence (AI), where powerful algorithms can predict human behaviour and changes in the world around us, we find ourselves in a paradoxical situation: we live under the illusion that our personal data belongs to us, yet in reality it remains beyond our reach. We have little genuine influence over how technology companies like Facebook, Amazon, OpenAI, Google, Apple, and others use our data.


Who really controls our data?


This raises a fundamental question: if we do not own our data, then who does?


Picture this: a few months ago, someone bought the latest Mitsubishi Outlander model. Every time the engine is started, a message appears on the screen: ‘All your vehicle data is collected for product development and research purposes. If you wish to limit data transmission to Mitsubishi Motors, press the INFO button...’ A car used daily has suddenly become more than just a vehicle – it is now a mobile data-collection platform. But is it really desirable for our driving habits, and even our personal routes, to end up in the databases of large corporations?


Another example concerns the fairness of social media platforms. In September 2024, LinkedIn quietly updated its terms of use, announcing that users’ posts and other profile information would be used to improve LinkedIn’s AI models. When journalists at ‘The Verge’ reported the change, many of LinkedIn’s 930 million users felt betrayed. ‘Why wasn’t I asked for consent?’ they wondered, sparking debates about fairness in the data market. The case demonstrated once again how users are often excluded from decisions that directly affect their privacy.


There are many similar incidents. Consider the controversial case involving Scarlett Johansson. In spring 2024, OpenAI introduced a new voice-controlled version of the ChatGPT app. One of the available voices sounded strikingly similar to the voice of Johansson’s character in the movie ‘Her’. The actress publicly expressed her disappointment: OpenAI CEO Sam Altman had invited her multiple times to record her voice, and after she firmly declined, OpenAI found another actress with a nearly identical vocal tone, whose voice was then ‘coincidentally’ used. The incident raises the question: can AI truly be said to have creative freedom if it is based on imitating someone else’s work or identity?


These examples illustrate the inequality of the digital world: the data people generate is raw material for technological advancement. The current data system is designed to entrench the position of tech companies. These corporations act as data controllers, while the rights granted to users under legislation such as the General Data Protection Regulation (GDPR) are limited and largely ineffective. It has long been quietly acknowledged that user ownership of data is more illusion than reality – the data generated by users of digital platforms and smart devices in fact belongs to the big companies.


What is the real value of our data?


Everyone knows that every step we take and every click in the digital space is tracked. Tech giants like Google, Apple, Facebook, Amazon, Microsoft, and others also monitor our heart rates, movement across the city, and even our facial expressions. They spend billions of dollars each year to ‘improve’ their services, while collecting our data under so-called ‘legitimate’ grounds, just like Mitsubishi’s use of data for research and development purposes. At the same time, these corporations attempt to convince users that their digital footprint is essentially worthless. Public reports show that basic personal data, such as age, gender, and location, is typically valued at only a few cents, while sensitive data, e.g. payment history or health information, might be worth just a few dollars per month.


Yet people tend to value their personal data much more highly. Recent studies by Harvard University Professor Cass Sunstein and Angela Winegar highlight a massive gap between how companies and individuals perceive data value. Using concepts from behavioural economics – willingness to pay (WTP) and willingness to accept (WTA) – the researchers found that while study participants were willing to pay just $5 per month to protect their data privacy, they would insist on as much as $80 per month to give up access to that same data.

This 16:1 ratio is among the highest ever recorded in behavioural economics and demonstrates how highly individuals value their personal data (more than clean water or air!). The researchers explain this phenomenon through the endowment effect: people place greater value on what they already own than on what they do not.
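The gap the study reports reduces to simple arithmetic. A minimal sketch using the monthly figures cited above (the variable names are illustrative, not part of the study):

```python
# Monthly figures from the Winegar & Sunstein study cited above.
willingness_to_pay = 5      # $ participants would pay to protect their data privacy
willingness_to_accept = 80  # $ participants would demand to give up that same data

# The endowment-effect gap: how much more people demand to part with
# their data than they would pay to keep it private.
ratio = willingness_to_accept / willingness_to_pay
print(f"WTA:WTP ratio = {ratio:.0f}:1")  # prints "WTA:WTP ratio = 16:1"
```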


Data ownership has become one of the key issues discussed in the era of AI breakthroughs. Public debates increasingly emphasise that personal data should be treated as personal property. Our data has become an integral part of our digital identity. Rapid technological progress is now making it possible to design a new data model that is human-centred rather than business-oriented. This model’s basic assumption and guiding principle is that all personal data is private by default and accessible only to its owner.


What if all data truly belonged to us?


Imagine a world where our data is truly private, i.e. no one has access to it except us. In this world, we would have complete dominion and control over our data and decide who can access it. Individuals would gain real power to manage their data and could benefit from it directly. This idea of data dominion and ownership is grounded in a technological transformation of the current data system; the essence of the revised model is to ensure that the data is owned and managed by the individuals themselves rather than corporate entities.


One of the main aspects of this idea is a personal data storage account. Each person would have their own digital wallet – a kind of personal data vault – in which data from different sources could be stored: social media and app usage histories, banking and payment patterns, health records, data from smart wearables and IoT devices, and so on.


By consolidating their data and leveraging AI technologies, individuals could ‘communicate’ with their data, interacting with it meaningfully and using it to their advantage. For instance, health data might help detect early signs of illness, financial information could support better planning and budgeting, and other types of data could inform everyday decisions.


Another important aspect of this idea is complete control over personal data. A human-centric data architecture would shift from the current opt-out model, in which users must constantly ask companies not to collect their data, to an opt-in model, in which nothing is shared unless the user explicitly decides who may access it. This data privacy and ownership model is already gaining traction in tech ecosystems worldwide and is opening up new markets for personalised services and products. These emerging ways of using personal data offer a glimpse of a future we will share with new AI-powered life forms: robots, personal AI twins, and AI assistants.
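The ‘private by default, opt-in access’ principle described above can be sketched in code. This is only a conceptual illustration – the class and method names (`DataVault`, `grant_access`, and so on) are hypothetical, not any real platform’s API:

```python
class DataVault:
    """A personal data vault: nothing is readable without an explicit opt-in grant."""

    def __init__(self):
        self._records = {}  # category -> data, e.g. "health", "location"
        self._grants = {}   # category -> set of parties the owner has opted in

    def store(self, category, data):
        # Only the owner writes to the vault.
        self._records[category] = data

    def grant_access(self, category, party):
        # Opt-in: the owner explicitly allows a named party to read one category.
        self._grants.setdefault(category, set()).add(party)

    def revoke_access(self, category, party):
        # Grants are revocable at any time.
        self._grants.get(category, set()).discard(party)

    def read(self, category, party):
        # Denial is the default: without a prior grant, access fails.
        if party in self._grants.get(category, set()):
            return self._records.get(category)
        raise PermissionError(f"{party} has no opt-in grant for '{category}'")


vault = DataVault()
vault.store("health", {"resting_heart_rate": 62})
vault.grant_access("health", "my_doctor")

print(vault.read("health", "my_doctor"))  # allowed: the owner opted in
# vault.read("health", "ad_network")      # would raise PermissionError
```

Making `read` fail unless a grant exists is what distinguishes opt-in from opt-out: denial is the default state, and every disclosure requires an explicit, revocable decision by the owner.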


Welcoming AI assistants and agents


Just a few years ago, engaging with virtual spaces primarily meant logging into digital networks or game platforms where we could create avatars and immerse ourselves in imaginary worlds. But over the past decade, the line between the virtual and the real has begun to blur. Today, we all carry the digital realm in our pockets, thanks to smartphones and perpetual internet access.


In the age of AI breakthroughs, we must recognise that new synthetic forms of life are already among us: smart devices (e.g. robot vacuum cleaners or wearable health trackers like smartwatches or rings) and AI-powered assistants that can answer any question or even plan our summer vacation for us. In Japan, ‘weak robots’, whose primary function is to reduce social isolation, are becoming increasingly popular. For example, the round, cuddly robot ‘Nicobo’, developed by Panasonic, is designed to communicate with humans and provide emotional support. In Tokyo, a unique café called ‘Cafe DAWN’ uses robots and AI tools to take customer orders, with the robots operated remotely by employees with reduced mobility, while some of the meals are prepared and served by the robots themselves.


The Japanese government has long acknowledged that in a rapidly ageing society, the notion of ‘zero-risk technology’ is an illusion; every technology comes with its own set of advantages and drawbacks, and some degree of risk is inevitably involved. Nonetheless, the Japanese authorities and partners from both the public and private sectors are working to harness emerging technologies to unlock new opportunities for addressing the most complex social problems.


The coexistence of humans and AI agents


How should we welcome the new synthetic forms of life entering our world? Did AI truly come to destroy humanity? Or will it actually help us overcome global challenges by, for example, reducing environmental pollution or slowing climate change? Let’s look at this issue pragmatically and see how AI and smart technologies can already change our lives.


To start, consider the provision of healthcare. Today, the US healthcare system is extraordinarily costly and primarily controlled by private insurance and pharmaceutical companies. It pays little attention to improving quality of life or preventing disease. Instead, an ill person pays for expensive surgery and is discharged to recover at home – a ‘repair-and-fix’ model. If people could instead use smart devices and receive guidance, insights, and encouragement from AI assistants on caring for their health, we could expect longer and healthier lives.


Another example is demographic change. Let’s face it: society will never have enough teachers, doctors, and nursing staff. AI could offer support here as well. Imagine a nurse on a home visit using AI tools to quickly fill out diagnostic forms and register the necessary data, leaving more time for human connection – a warm touch and words of comfort.

 

A new social contract


Amid the current technological transformations, pressing questions inevitably emerge about our values and the guiding principles needed to address the ethical, social, and economic challenges posed by AI. At such historic crossroads, societies return to fundamental values: human dignity, social justice, inalienable natural rights, and the protection of property. As we stand at the intersection of AI innovation and ethical considerations, one truth becomes clear: we should not fear that AI will completely replace us. On the contrary, we must have the courage to experiment boldly, to continue developing and innovating, and to integrate cutting-edge technologies so that we can shape the future we wish to live in.


In this future vision, humans and data become the cornerstones of a new social contract. Only through constant exploration, bold experimentation, and the courage to ask new questions can we build a stronger social structure based on mutual trust, respect, and shared responsibility for the future of our world.


Feel free to ask questions related to the topic of this article directly to Paulius Jurčys’ AI knowledge twin here.