February 5, 2024 By: Avneet Nehel and Craig Zawada
In an era already dominated by technological advancement, artificial intelligence (“AI”) has emerged as a powerful force shaping our lives. While AI offers unprecedented benefits and conveniences, it also raises significant concerns about privacy, particularly in a country like Canada, which prides itself on its commitment to individual freedoms and data protection. As Canadians await the passage of the Artificial Intelligence and Data Act as part of Bill C-27 and the Digital Charter, we explore the risks privacy faces in Canada in the absence of AI-specific regulation. While some rules exist in sectors such as health and finance, no current regulation addresses the risks that arise during the design and development of AI systems.
Before reviewing the specific instances where privacy is at risk, one of AI’s chief foundations must be noted. Generative AI tools like ChatGPT rest on enormous datasets, gathered from numerous sources and the internet itself. The sheer scope of this data is a problem because it increases the risk that personal information is included. The vastness of the datasets also makes them less transparent: humans are practically incapable of reviewing what they contain, at least in any meaningful way.
1. Surveillance and Data Collection
A primary danger of AI in Canada lies in the realm of surveillance and extensive data collection. As AI systems become increasingly sophisticated, their capacity to collect and analyze vast amounts of personal data raises concerns about the erosion of privacy. Surveillance technologies powered by AI can track individuals’ movements and behavior, and even predict future actions, with obvious privacy implications. Organizations need to ensure that consent is properly obtained and that individuals are reasonably notified of surveillance. For example, Google Tag Manager, used on tens of millions of websites to manage third-party JavaScript, has been found to include data leaks, security vulnerabilities, arbitrary script injections, and instances of consent for data collection being enabled by default. Personal privacy can suffer as a result.
2. Biometric Recognition and Facial Recognition Technology
The deployment of biometric recognition and facial recognition technology poses a significant threat to privacy in Canada. AI-driven systems can now identify and track individuals based on their facial features, fingerprints, or even behavioral patterns. While these technologies may enhance security in some settings, they also open the door to mass surveillance and unauthorized monitoring, jeopardizing the fundamental right to privacy. Storage of such data is a growing concern: any hack or breach puts individuals at greater risk of identity theft and criminal misuse of their information. Consider TikTok, which holds a massive amount of facial and personal data, all stored outside of Canada. Procido has already written an article on this danger.
3. Algorithmic Bias and Discrimination
AI algorithms are only as unbiased as the data they are trained on. Concerns have been raised about the potential for algorithmic bias to lead to discriminatory outcomes. If AI systems are trained on biased data, they can perpetuate and amplify existing social inequalities. This poses a risk to privacy by disproportionately affecting certain groups of people, potentially reinforcing stereotypes and hindering equal access to opportunities.
For example, an AI system should not single out a particular race unless it is programmed to do so. If programmed to focus on a specific race, the algorithm may neglect others, generating racially biased results.
That is not the whole problem, however. Generative AI “learns”: it adapts its answers with new input and according to patterns it recognizes over time. This is itself a form of programming, but even the developers of AI tools do not fully understand how answers are generated. Put another way, bias can be introduced when the technology is first developed, and again every day as it adjusts its results through machine learning.
The dataset bias mentioned earlier compounds these problems. A Washington Post article noted that, during an experiment, an AI repeatedly chose a Black man’s face as a criminal. The bots in question had been trained on data from across the internet and had developed an inherent bias based on sex and race. Like the data it subsumes, AI is not neutral or unbiased. If that bias is not recognized and corrected for, serious issues can arise.
4. Invasion of Personal Spaces
The proliferation of smart devices and the Internet of Things (IoT) introduces new challenges to privacy in Canada. AI-powered devices, such as smart home assistants, constantly collect data on users’ preferences, habits, and daily routines. The intimate nature of this information raises concerns about the invasion of personal spaces and the potential for unauthorized access to sensitive data, creating fertile ground for privacy breaches. Procido’s recent article on the recording of personal data by automobiles shows how far the collection of personal data has gone. Intimate moments are captured without the individual’s true consent or knowledge. It is important that organizations differentiate between data needed to improve services or conduct research and excessive or intrusive collection that violates individuals’ privacy.
5. Lack of Regulation and Accountability
As AI technologies advance at an increasingly rapid pace, Canada’s legal and regulatory frameworks struggle to keep up. The lack of comprehensive legislation specifically addressing the challenges AI poses to privacy leaves individuals vulnerable, and the absence of clear guidelines and accountability measures for AI developers and users increases the risk of unchecked data exploitation and misuse. Bill C-27, the Digital Charter legislation that would regulate AI through the Artificial Intelligence and Data Act, has been under deliberation in Parliament since 2022. Although Canada has developed guiding principles on generative AI, their adoption is voluntary.
Conclusion
While AI brings numerous advancements that can enhance our daily lives, it is crucial to address the associated dangers to privacy in Canada. Striking a balance between technological innovation and protecting individual rights requires a concerted effort from policymakers, industry leaders, and the public. Establishing robust regulations, promoting transparency, and fostering public awareness are essential steps to safeguard privacy in the face of the growing influence of AI in Canada. Governments, organizations and individuals must act now to ensure that the benefits of AI do not come at the cost of compromising the privacy and fundamental rights of Canadians.
Disclaimer
This publication is provided as an information service and may include items reported from other sources. We do not warrant its accuracy. This information is not meant as legal opinion or advice. Contact Procido LLP (www.procido.com) if you require legal advice on the topics discussed in this article.