(and the Future of Freedom of Thought)

BY BISMA SHOAIB AND DANICA NOBLE
It is hard to write about artificial intelligence (AI) without using hyperbole, and it is also difficult to find a comprehensive definition of AI.[1] But it is not difficult to conclude that AI is already fundamentally reshaping societal norms and revolutionizing our interaction with the world. As companies actively engage in providing and leveraging AI systems, significant opportunities emerge for enhancing productivity, gaining new insights, creating new capacities, and even solving old problems. This transformation is happening fast. The AI paradigm shift is already affecting our daily lives in both overt and subtle ways. The potential for leaps forward in progress offers profound benefits but also introduces notable risks and vulnerabilities.
We experience the benefits of AI-powered systems every day. Scientists have used AI to discover a new class of antibiotics effective against drug-resistant Staphylococcus aureus, the first breakthrough in antibiotic development in over 60 years.[2] The AI-driven XO Exam System has transformed eye care, improving diagnostic accuracy and early disease detection while more effectively meeting global demand and expanding access to ophthalmology services in rural and non-health-care settings.[3] Using a machine-learning method to assess DNA, AI can even enable real-time classification of brain tumors during surgery, helping surgeons identify tumor types and adjust their strategies in the moment.[4]
While not always visible, AI and machine-learning algorithms are already touching your life, determining things like the ads you see online, the interest rate you receive on a loan, whether you get a call back on a job application, the prices you see online, and even surge pricing for an Uber. Machine learning is also employed to safeguard our email accounts.[5] Similarly, Google Maps and other travel apps use AI to track traffic, providing real-time updates on traffic and weather conditions.[6] Most recently, ChatGPT became the fastest-growing app in history, beating out Google, Instagram, and TikTok.[7]
At the same time, AI has the ability to supercharge fraud, amplify discrimination, create nonconsensual and harmful images, aid attacks on digital infrastructure at enormous scale, disrupt democracies, and yes, maybe someday enable killer robots.
Given AI's widespread application and its potential for harm, legal practitioners should work to understand the technical aspects of AI and become proficient in the swiftly developing legal standards applied to it. As counsel to companies using or producing AI, practitioners must adeptly apply existing legal frameworks to safeguard against AI misuse.
This article will offer practical guidance for navigating the intricacies of AI with respect to competition, consumer protection, and even basic human rights.
AI and Competition Law
In October 2023, the WSBA Antitrust, Consumer Protection, and Unfair Business Practices Section hosted a mini-CLE with speakers from government, technology, academia, and private practice to discuss the intersection of AI and competition law. To some degree, that intersection is approaching rather than upon us, but plenty of ink has been spilled in speculation. As companies across various sectors increasingly integrate AI into their core business operations, they may be able to leverage benefits such as enhanced operational efficiencies, cost reduction, streamlined processes, improved customer experiences, and optimized profitability. However, access to the AI building blocks (huge amounts of data, processing and computing power, and specialized talent) is not widely distributed, and concentrated market structures can raise antitrust concerns. For example, the chips that power most foundational generative AI models are currently made in highly concentrated markets, and supply does not satisfy demand.[8] Because chip supply is so concentrated, the market may be vulnerable to anticompetitive conduct.[9] Further, the companies with the largest cloud computing capacity may also be among the platforms with the greatest access to the talent and data that large-language and other base AI models require.
A flurry of public statements from the Federal Trade Commission (FTC) and the Department of Justice (DOJ) over the last year has addressed potential anticompetitive risks from AI. In February 2023, DOJ Principal Deputy Assistant Attorney General Doha Mekki addressed the antitrust risks that arise when companies use AI for data aggregation and make collaborative decisions affecting pricing and output.[10] Mekki highlighted findings from several studies indicating that algorithms can induce either implicit or explicit collusion in the marketplace, potentially raising prices or, at the very least, weakening competitive dynamics.[11] Mekki underscored the DOJ's commitment to facilitating businesses' innovative and competitive uses of AI.[12] At the same time, she emphasized the agency's resolve to intervene and prevent misuse of AI that could adversely affect fair competition.[13]
The FTC has also pointed to potential antitrust risks linked to AI.[14] FTC Chair Lina Khan has emphasized the necessity for both state and federal enforcers to maintain vigilance in the early stages of AI development, ensuring that businesses adhere to existing laws.[15] Khan has stressed that the FTC is well equipped with existing authority and expertise to address issues arising from the swiftly evolving AI sector, specifically those related to collusion and unfair competition practices.[16] Furthermore, the FTC has discussed platform/network effects and open-source dynamics, cautioning against tactics in which companies offer AI resources as open source initially but later close their ecosystems, restricting competition and leading to lock-in.[17]
In January of this year, the FTC launched a market study inquiry into generative AI investments and partnerships under its statutory authority of Section 6(b) of the FTC Act.[18] This 6(b) investigation seeks to analyze corporate collaborations and investments involving AI providers to understand those associations' effects on competition.[19] Compulsory orders for information were issued to Alphabet, Inc.; Amazon.com, Inc.; Anthropic PBC; Microsoft Corp.; and OpenAI, Inc.[20] Similarly, Assistant Attorney General for Antitrust Jonathan Kanter has stated that the DOJ has initiated multiple investigations into competition in AI.[21]
To anticipate potential antitrust risks associated with AI use, antitrust attorneys should adopt several strategic measures. Attorneys should initiate detailed antitrust risk assessments tailored to their clients' AI applications, examining factors such as AI's role in decision-making, reliance on competitor data, dataset size, the industry's rate of AI adoption, and the degree of human involvement in decisions.[22] They must scrutinize data sources, including aggregated third-party data sources, pricing algorithms, and revenue-management tools employing AI, to ensure accurate and unbiased data usage and to guard against misuse or collusion allegations.[23] It is important to update antitrust compliance policies and training programs to educate employees across relevant departments on the antitrust risks posed by AI, and equally important to educate consumers of these products.
Antitrust counsel should actively participate in AI development, implementation, and marketing processes, especially when linking AI applications or transitioning from open-source to proprietary ecosystems, to assess and mitigate risks of such technology. Likewise, clients should be encouraged to collaborate with antitrust and licensing counsel to secure appropriate compliance representations and indemnification in AI product and service licenses.
Consumer Protection in the Age of AI
On Feb. 14, the WSBA Antitrust, Consumer Protection, and Unfair Business Practices Section hosted another mini-CLE, this time on consumer protection and generative AI, exploring AI's nearly boundless applications and their substantial implications for consumer protection. While AI enables consumers to benefit from tools like chatbots and rapid (automated) decision-making, it also introduces challenges, including algorithmic opacity, embedded biases, and privacy-invasive practices. Counsel will need to help companies balance approaches that leverage AI's benefits against these potential risks.
One case illustrating the serious consequences of algorithmic decisions was reported in 2019, when a study published in Science demonstrated that a widely used algorithm helping to determine health care for some of the most seriously ill Americans discriminated based on race.[24] The research showed that software guiding additional and fast-tracked health care services for more than 10 million Americans systematically advantaged the care of white patients over Black patients, resulting in worse outcomes for Black patients.[25] Many hospitals use algorithms to identify primary care patients with complex health needs in order to provide additional support.[26] Analysis of more than 50,000 patient records showed that, based on the determinations of an algorithm employed by many hospitals, white patients received higher quality health care than similarly presenting Black patients.[27]
Regulatory bodies are diligently addressing the various adverse effects of AI on consumers. In a joint statement, the FTC, the Consumer Financial Protection Bureau, the Justice Department's Civil Rights Division, and the U.S. Equal Employment Opportunity Commission emphasized their collective enforcement efforts against discrimination and bias in automated systems, cautioning that AI can lead to unlawful discrimination.[28] The FTC specifically maintains that AI is subject to the same regulatory and legal principles designed to protect against deception and unfairness, emphasizing that it will regulate AI in a manner consistent with its past approach to other products.[29]
The FTC aims to deter companies from unlawfully developing and deploying AI through the imposition of meaningful penalties. For example, algorithmic disgorgement involves the systematic removal of both the data and the algorithms used to monetize that data. The FTC issued an algorithmic disgorgement order to WW International, asserting that the company's mobile application, which provided weight-management and tracking services for children, teenagers, and families, violated the Children's Online Privacy Protection Act.[30] The FTC alleged that WW International marketed a weight loss app targeting children as young as eight and subsequently gathered their personal information without obtaining parental consent. WW International was required to delete both the data and the algorithm it was using for the app.[31]
In another recent enforcement action, the FTC prohibited Rite Aid from using AI facial recognition technology.[32] The action responded to the retailer's eight-year use of the technology without sufficient safeguards, which resulted in the misidentification of consumers, particularly women and individuals of color, as shoplifters.[33] The ban will remain in effect for five years.[34]
AI technologies face heightened scrutiny under international, domestic, state, and municipal frameworks, such as President Biden's comprehensive executive order on AI issued in October 2023. These frameworks share some common tenets, such as the eight guiding principles and priorities the executive order sets out for how AI systems should be developed and deployed. The eight principles are:
1. Safety and security through robust evaluations and transparency;
2. Responsible innovation and competition, including investments in education and research;
3. Support for American workers amid AI-driven job changes;
4. Alignment of AI policies with equity and civil rights;
5. Enforcement of consumer protection laws against AI-related fraud and privacy infringements;
6. Protection of privacy and civil liberties in AI data handling;
7. Management of risks from government AI use and capacity enhancement for regulation; and
8. Leadership in global AI progress and collaboration.
Completely unbiased AI systems may be unattainable. When organizations employ AI systems to make decisions that carry legal risks of discrimination, it is imperative that attorneys collaborate with data scientists during the development process. Ethics should not be an afterthought. Attorneys should engage external parties, such as academic researchers, consumer advocacy groups, or independent auditors, to identify and address potential bias, discrimination, or unfairness in AI models. Additionally, attorneys may recommend setting up internal ombudsman services dedicated to receiving and reviewing complaints from the various groups involved, including employees and consumers.
AI, Neurotechnology, and Privacy
Advancements in neuroscience and AI have intersected, resulting in the emergence of consumer neurotech devices, which connect human brains to computers and employ increasingly sophisticated algorithms to analyze the data they receive.[35]
Large platforms including Meta,[36] Microsoft,[37] and Apple[38] are making significant investments in brain-tracking and decoding technology. While other biotech ventures like Neuralink (Elon Musk), Synchron (Jeff Bezos and Bill Gates), and Blackrock Neurotech have embarked on trials of human-implantable neurotech devices,[39] consumers have already begun submitting to brain scans. French beauty and fragrance industry leader L'Oréal has established a strategic collaboration with Emotiv, a neurotech company, introducing in-store consultations that use multi-sensor EEG-based headsets and advanced machine-learning algorithms to detect and decode customers' brain activity, aiming to personalize fragrance selection based on individual emotions.[40] Similarly, Ikea offered limited edition art pieces, but only to consumers willing to don headwear used to detect whether they actually loved the art or were instead likely making a speculative purchase for resale.[41] Do these initial offerings suggest that some consumers are willing to trade brain data for products such as personalized perfume recommendations?
The potential benefits of consumer neurotech devices are profound. On the health side, companies are developing and marketing wearable gadgets capable of monitoring EEG signals, which have the potential to notify individuals with epilepsy of impending seizures.[42] Similarly, individuals with quadriplegia are beginning to operate electronic devices using their thoughts.[43] In the workplace, neurotechnology promises advantages like fatigue tracking to avoid accidents and promote heightened concentration, improved emotional and cognitive skills, and reduced bias in recruitment processes.[44]
Nevertheless, progress in consumer neurotech devices raises substantial privacy concerns, particularly around data privacy, self-determination, and freedom of thought. While privacy challenges can arise from processing any personal data, processing brain data presents specific ethical concerns because the data is especially sensitive. Collecting brain data raises questions about the capacity for informed consent, the detection of unuttered or even unconscious thoughts, and the unintentional revelation of sensitive health data.[45] Claims are proliferating that detected brain signals can predict neurological health status, individual preferences, attitudes, and behavior.[46] Coupled with AI and the emergence of consumer-grade brain data detection devices, the collection, processing, and availability of neurodata are also expanding beyond clinical and research settings into medical, academic, and even commercial applications.[47] Consequently, many novel legal and ethical issues are emerging, and it is unclear whether the law is keeping up.
For example, such technology raises questions about privacy and oversight in the workplace.[48] While federal monitoring regulations give employers significant authority to monitor the activities of their employees during work hours,[49] these workplace surveillance laws primarily center on matters of consent.[50] Could employees consent to the surveillance of their thoughts and brain activity? Possibly. Meanwhile, more than 5,000 companies worldwide already use SmartCap, a wearable system that tracks brain signals to monitor employee fatigue.[51]
As neurotechnology and AI rapidly converge, establishing definitive rights to cognitive liberty should be prioritized.[52] Currently, the U.S. Constitution, state and federal laws, and even international treaties lack explicit recognition of a right to cognitive liberty.[53] Establishing that right will better allow us to reap the benefits of neurotechnology without sacrificing the rights to mental privacy and self-determination over our own brains.
. . .
SIDEBAR
Join the WSBA Antitrust, Consumer Protection, and Unfair Business Practices Section
Legal professionals seeking to stay informed about the ever-changing AI legal landscape are encouraged to become members of the WSBA Antitrust, Consumer Protection, and Unfair Business Practices Section. Our recent CLE series has focused on emerging issues in AI, including sessions on the intersection of AI with competition law and consumer protection. In a CLE in April, we will explore privacy and best-practice issues in AI development and deployment in connection with neurotechnology and freedom of thought. Membership in the section not only offers practical guidance for navigating emerging issues but also provides access to cutting-edge resources, networking opportunities, and a community of experts.
NOTES
1. www.merriam-webster.com/dictionary/artificial%20intelligence: “The capability of computer systems or algorithms to imitate intelligent human behavior.”
2. www.nature.com/articles/s41586-023-06887-8.
3. https://xophthalmics.com.
4. www.nature.com/articles/d41586-023-03072-9.
5. www.forbes.com/sites/bernardmarr/2019/12/16/the-10-best-examples-of-how-ai-is-already-used-in-our-everyday-life/.
6. Id.
7. https://time.com/6253615/chatgpt-fastest-growing/.
8. www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/06/generative-ai-raises-competition-concerns; www.cnn.com/2023/08/06/tech/ai-chips-supply-chain/index.html.
9. www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/06/generative-ai-raises-competition-concerns.
10. www.justice.gov/opa/speech/principal-deputy-assistant-attorney-general-doha-mekki-antitrust-division-delivers-0; see also www.ftc.gov/business-guidance/blog/2024/03/price-fixing-algorithm-still-price-fixing.
11. www.justice.gov/opa/speech/principal-deputy-assistant-attorney-general-doha-mekki-antitrust-division-delivers-0.
12. Id.
13. Id.
14. In June 2023, the FTCโs Technology Blog highlighted potential competition issues linked to generative AI. The article pointed out that control over essential inputs, such as data, could lead to illegal barriers, hinder innovation, and enable questionable practices like product bundling. See www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/06/generative-ai-raises-competition-concerns.
15. www.nytimes.com/2023/05/03/opinion/ai-lina-khan-ftc-technology.html.
16. Id.
17. “Generative AI Raises Competition Concerns,” FTC, June 29, 2023, www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/06/generative-ai-raises-competition-concerns.
18. Section 6(b) of the FTC Act gives the FTC special authority and tools to conduct studies, separate from the agency's law enforcement authority. Under Section 6(b), the FTC can issue an order to require a company to file annual or special reports, in writing, to answer specific questions about various aspects of the company's business conduct. See www.ftc.gov/news-events/news/press-releases/2020/02/ftc-examine-past-acquisitions-large-technology-companies. The agency's 6(b) studies enable deeper understanding of market structure, trends, and business practices, which may be used to inform future Commission actions. See www.ftc.gov/news-events/news/press-releases/2024/01/ftc-launches-inquiry-generative-ai-investments-partnerships.
19. www.ftc.gov/news-events/news/press-releases/2024/01/ftc-launches-inquiry-generative-ai-investments-partnerships.
20. Id.
21. https://news.bloomberglaw.com/antitrust/ai-antitrust-probes-are-underway-doj-says-without-specifying.
22. See www.oecd.org/daf/competition/algorithmic-competition-2023.pdf.
23. Id.
24. www.science.org/doi/10.1126/science.aax2342.
25. www.wired.com/story/how-algorithm-favored-whites-over-blacks-health-care/.
26. www.nature.com/articles/d41586-019-03228-6.
27. www.science.org/doi/10.1126/science.aax2342.
28. www.ftc.gov/system/files/ftc_gov/pdf/EEOC-CRT-FTC-CFPB-AI-Joint-Statement%28final%29.pdf.
29. www.ftc.gov/news-events/news/press-releases/2023/04/ftc-chair-khan-officials-doj-cfpb-eeoc-release-joint-statement-ai.
30. www.ftc.gov/news-events/news/press-releases/2022/03/ftc-takes-action-against-company-formerly-known-weight-watchers-illegally-collecting-kids-sensitive.
31. Id.
32. www.ftc.gov/news-events/news/press-releases/2023/12/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without.
33. Id.
34. Id.
35. www.nature.com/articles/s41928-023-00929-9.
36. https://9to5google.com/2023/02/28/meta-smart-glasses-2025/.
37. www.microsoft.com/en-us/research/video/neural-interfaces-towards-a-new-generation-of-human-computer-interface/.
38. www.apple.com/apple-vision-pro/.
39. www.washingtonpost.com/business/2023/05/25/elon-musk-neuralink-fda-approval/; https://blackrockneurotech.com/; https://synchron.com/.
40. www.globalcosmeticsnews.com/loreal-partners-with-emotiv-to-launch-fragrance-personalisation-device/.
41. www.inc.com/betsy-mikel/ikea-refused-to-sell-its-new-rugs-to-anyone-whose-brain-waves-didnt-pass-their-test.html; See also, Nita Farahany, The Battle for Your Brain (St. Martin’s Press, 2023).
42. www.sciencedirect.com/science/article/abs/pii/S0022510X21003051; www.neuro-help.com.
43. www.nature.com/articles/d41586-022-01047-w; www.biospace.com/article/braingate-successfully-tested-a-brain-computer-interface-for-quadriplegics/; www.wired.com/story/neuralink-implant-first-human-patient-demonstration/.
44. www.welcometothejungle.com/en/articles/neurotech-performance-at-work.
45. www.frontiersin.org/articles/10.3389/fhumd.2023.1245619/full.
46. Id.
47. Anita S. Jwa and Russell A. Poldrack, “Addressing privacy risk in neuroscience data: from data protection to harm prevention,” Journal of Law and Biosciences 9, no. 2 (September 2022).
48. www.welcometothejungle.com/en/articles/neurotech-performance-at-work.
49. www.americanbar.org/digital-asset-abstract.html/content/dam/aba/events/labor_law/2016/04/tech/papers/monitoring_ella.pdf.
50. Id. See also The Electronic Communications Privacy Act of 1986.
51. Nita Farahany, “Neurotech at Work,” Harvard Business Review, March-April 2023, https://hbr.org/2023/03/neurotech-at-work.
52. Liz Mineo, “Fighting for our Cognitive Liberty,” The Harvard Gazette, April 26, 2023, https://news.harvard.edu/gazette/story/2023/04/we-should-be-fighting-for-our-cognitive-liberty-says-ethics-expert/.
53. Nita Farahany, “‘Cognitive Liberty’ Is the Human Right We Need to Talk About,” Time, June 26, 2023, https://time.com/6289229/cognitive-liberty-human-right/; Ian Sample, “New Human Rights to Protect Against ‘Mind Hacking’ and Brain Data Theft Proposed,” The Guardian, April 26, 2017, www.theguardian.com/science/2017/apr/26/new-human-rights-to-protect-against-mind-hacking-and-brain-data-theft-proposed.


