Modern face recognition software uses biometrics to map facial features from an image or video. In the context of digital onboarding, demographic factors such as ethnicity, age, gender, and socioeconomic circumstances, and even camera and device quality, can affect the software's ability to match one face against a database of faces, a phenomenon known as AI bias. The quality and resilience of the underlying databases used in various kinds of surveillance can likewise feed bias into the AI models.
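To make the matching step concrete, here is a minimal sketch of how one-to-many (1:N) face identification is commonly structured: each face is reduced to a numeric embedding, and a probe face is compared against every enrolled embedding in the database. The embedding dimension, the similarity threshold, and the synthetic data below are illustrative assumptions, not any particular vendor's pipeline; bias shows up when match scores systematically differ across demographic groups.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, database: dict, threshold: float = 0.6):
    """1:N identification: compare one probe embedding against every
    enrolled embedding and return the best match above the threshold.
    The 0.6 threshold is an illustrative assumption; real systems tune
    it, and a threshold tuned on one demographic group can raise error
    rates for others."""
    best_id, best_score = None, -1.0
    for person_id, enrolled in database.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Synthetic demo: in practice, embeddings come from a face-embedding model.
rng = np.random.default_rng(0)
db = {f"person_{i}": rng.normal(size=128) for i in range(5)}
probe = db["person_3"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(identify(probe, db))
```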
Artificial Intelligence's ability to augment and support progress and development over the past few decades is undeniable. But when does it become damaging, even contradictory? In our latest Beyond Data podcast, AI's Climate Jekyll & Hyde – friend and foe, Tessa Jones (our VP of Data Science, Research & Development) and Sophie Chase-Borthwick (our Data Ethics & Governance Lead) discuss exactly this with Joe Baguley, Vice President and Chief Technology Officer, EMEA, at VMware.
This blog has been written by an independent guest blogger. Since the advent of AI, the debate over its ethical and unethical uses has been ongoing. From movies to discussions and research, the potentially adversarial impact AI could have on the world has been a constant concern for every privacy- and security-conscious person out there.
Egnyte has been developing and using AI and Machine Learning (ML) technology for quite some time. We use it internally to detect sensitive information for our customers so that policies can be put in place to protect that information, and we continue to find new ways to apply these models to better support our customers.
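Egnyte's actual models are not public, but a toy sketch shows the general shape of the task: scan text for items that look like sensitive data and report findings that a policy engine could act on. The regular-expression patterns below are a deliberately simplified stand-in for an ML classifier, not Egnyte's detection logic.

```python
import re

# Illustrative pattern-based stand-in for an ML classifier;
# the labels and regexes are assumptions for this sketch.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan(text: str):
    """Return (label, match) pairs that a policy engine could act on,
    e.g., by restricting sharing of the file that contains them."""
    findings = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((label, match.group()))
    return findings

sample = "Contact jane@example.com; SSN on file: 123-45-6789."
print(scan(sample))
```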
Digital transformation has driven the rapid adoption of cloud-delivered services such as SaaS, IaaS, and PaaS in enterprises. This, in turn, has resulted in the migration of digital assets (i.e., data) from the confines of enterprise data centers to cloud data centers that are outside enterprise control. Additionally, the onset of the COVID-19 pandemic has made remote work the norm.