What you need to know about responsible innovation and AI-ethics for open source projects
New technologies such as Artificial Intelligence promise to bring remarkable advancements to society and to solve pressing problems. However, as we have seen in recent years, these technologies also have a large potential for unintended consequences that can harm individuals and society as a whole. AI is challenging the meaning of human progress: while technology has long been understood as "good" for human progress, current technological advancements put pressure on human rights such as privacy and the right to non-discrimination, not to mention their carbon footprint and many other ethical concerns.
Nextcloud, too, is facing a dilemma: given the rising popularity of services like ChatGPT, how can we offer our users features and user experiences comparable to Big Tech's, while also guaranteeing user rights and freedoms and generally trying to do the right and ethical thing?
Daphne, a TEDx speaker who wears two hats as leader of the AI team at Nextcloud and as an academic researcher in this field, will share her experience of how best to navigate this new era of innovation while staying true to our core principles and values.
We will explore how to weave ethical considerations, regulatory frameworks, and responsible practices into our development.