
Whose Data Is It Anyway?

Karolina Gerlich, Chief Executive, The Care Workers’ Charity

Karolina Gerlich, Chief Executive at The Care Workers’ Charity, explains why care workers are drawing the line on AI — and what must change to protect dignity, trust and professional judgement.

Artificial Intelligence is increasingly shaping how care is planned, delivered and monitored. From predictive analytics and digital care records to automated decision-support tools, AI is often presented as a solution to workforce pressures and system inefficiencies. Yet as AI moves deeper into adult social care, a fundamental question is coming into sharper focus: whose data is being used, and on whose terms?

For care workers, this question is not abstract. It goes to the heart of dignity, autonomy and trust, both for the workforce and for the people they support.

Care workers already operate in environments that demand high skill and responsibility while offering limited resources and intense scrutiny. The introduction of AI into this landscape risks adding another layer of pressure if it is deployed without care worker involvement, transparency or meaningful safeguards.

Data generated through care work is profoundly human. It includes intimate details about people’s health, behaviour, relationships and daily lives. When this data is repurposed for algorithmic systems, often without clear explanation or consent, care workers are reduced to data collectors rather than recognised as skilled professionals exercising judgement, empathy and ethical reasoning.

There is growing concern that AI tools may be used to monitor performance, standardise practice or make implicit judgements about risk and quality, without adequately reflecting the realities of care. Systems designed without care workers’ input risk undermining professional autonomy and reducing complex human relationships to metrics and scores.

Trust is central to care. People drawing on care trust care workers with their most personal information, their vulnerabilities and their stories. Care workers, in turn, must be able to trust that the systems they are required to use will not compromise those relationships.

Responsible AI in adult social care must therefore begin with clear boundaries. Care workers are increasingly clear that data gathered in the context of care should not be extracted, analysed or shared in ways that conflict with the purpose for which it was given. Consent must be meaningful, not assumed. Transparency must be genuine, not buried in technical language.

Crucially, care workers are also asking where accountability sits when AI influences decisions. If an algorithm flags risk, prioritises tasks or shapes assessments, who is responsible when harm occurs? Without clear answers, AI risks shifting liability onto care workers while stripping them of agency.

In response, care workers are drawing a clear line around how AI should be developed and used.

In May 2024, care workers came together in collaboration with Oxford University’s Centre for Ethics in AI and The Care Workers’ Charity to develop guidance and a shared statement of expectations on the responsible use of AI, including generative AI, in adult social care. This guidance sets out clear expectations for employers, technology developers, local authorities, policymakers and regulators. It makes explicit that ethical AI must be grounded in the realities of care and shaped with care workers, not imposed upon them. Training, transparency and clarity are central to this approach. The full statement can be accessed via The Care Workers’ Charity website.

Care workers are calling for AI systems that support, rather than replace, professional judgement, for data governance frameworks that respect human rights and for design processes that embed their voices from the outset. They are also highlighting the risk that poorly designed digital tools could deepen existing inequalities, particularly for migrant care workers and those already subject to heightened surveillance and insecurity.

As AI continues to evolve, the sector faces a choice. Technology can either entrench existing power imbalances or help build a system rooted in dignity, trust and shared accountability. Care workers are clear about what responsible AI should look like: transparent, proportionate, rights-based and co-produced. Anything less risks eroding the very foundations of care.

The future of AI in adult social care is not simply a technical challenge. It is a moral one. Care workers are no longer willing to be passive subjects in decisions about data, technology and design; instead, they are asserting their role as ethical leaders in shaping innovation.

Further Reading

  • Care Workers’ Guidance and Statement of Expectations on the Responsible Use of AI in Adult Social Care (May 2024)
    Developed in partnership with Oxford University’s Centre for Ethics in AI and The Care Workers’ Charity.
