Newsletter
NENNA.AI Monthly newsletter - October
11.10.2024
Hello, NENNA.AI fellows!
Welcome to our October edition! This month's newsletter includes updates from LinkedIn, ChatGPT, and, of course, NENNA. Scroll down for more.
AND thank you for being part of our community! 💜
TODAY'S MENU
LinkedIn
Jobs @ NENNA
California Bill
NENNA: Team completed
Data Security: ChatGPT data collection
What else was important
LinkedIn updates AI Training Policy (Opt-out Available for Non-EU Users)
Summary: LinkedIn recently updated its AI training policy, enabling the platform to use users' data to train and fine-tune generative AI models by default for everyone outside the EU, EEA, or Switzerland. Users can opt out of this data usage, but opting out does not affect the use of data for AI models used for personalization or security, or during direct interactions with AI features.
Why is it important: This update raises data privacy concerns, as it allows LinkedIn to use personal data for AI training without explicit user consent. Understanding and managing these settings is crucial for anyone who wants to control how their data is used, particularly in AI development. Due to the GDPR, however, LinkedIn is not permitted to use the personal data of EU citizens for this purpose without explicit consent, giving users in these regions stronger privacy protection.
We are still looking for awesome people to join us! 😉
Jobs:
Software Engineer (m/f/d) —> here
Content Marketing Manager (m/f/d) —> here
Student Assistant – Marketing (m/f/d) —> here
If you know someone who is looking for a job with purpose and wants to help push a young startup forward, please reach out to jobs@nenna.ai.
California Bill recognizes personal information in AI Models
Summary: The California Senate has passed Assembly Bill 1008 (AB 1008), an amendment to the California Consumer Privacy Act (CCPA) of 2018, which extends the definition of personal information (PI) to include data within artificial intelligence systems. If the bill becomes law, developers of AI models will have to identify, manage, and potentially erase personal information embedded in their models to comply with privacy regulations.
Why is it important: The bill could have significant implications for AI developers, as it acknowledges that personal information can exist in abstract digital formats such as AI systems. It creates new compliance challenges, including identifying and erasing personal information within AI models and ensuring the accuracy of responses about individuals. The move strengthens privacy protections for consumers, ensuring AI systems do not misuse or misrepresent personal data.
NENNA.AI Team finally complete and ready for AI Enablement
We are excited to welcome Lars Moll and Max Flöttmann as the new CEO and CTO of NENNA.AI. Lars, who starts mid-October, will lead marketing, sales, and finance, bringing over 15 years of digital management experience. Max, an expert in software development, AI, and data-driven business models, will be responsible for NENNA's technological strategy and will oversee our AI initiatives. This appointment marks a significant step in our company's growth phase: Lars and Max join forces with our founder Alexander Siebert and Head of Business Development Florian Spengler. With their extensive management expertise and strong networks, both support our mission to provide companies with secure and compliant AI solutions.
If you want to work with us too, check out the job postings above, join us at NENNA.AI, and be part of the future of secure AI! 🙌
ChatGPT and other AI tools are collecting more personal data than you might realize
Summary: The Atlantic reports that AI tools like ChatGPT are increasingly being used to share intimate details of people's lives, often without users fully understanding the potential privacy risks. While AI companies promise to keep conversations secure, these chat logs are stored, used for model training, and may eventually contribute to targeted advertising. The vast amount of personal data users willingly provide to chatbots raises new privacy concerns, particularly as AI technology continues to advance.
Details:
Data Collection: Chatbots like ChatGPT collect vast amounts of personal data from users during conversations, which are stored and used to improve AI models.
Privacy Risks: Despite promises of security, AI systems have experienced breaches, raising concerns about the safety of personal data and its potential use in legal cases.
Targeted Advertising: AI companies may eventually monetize the data collected from chat logs by using it to deliver highly personalized ads, similar to how search engines like Google operate.
Why is it important: The rapid adoption of AI chatbots opens up significant privacy risks as users divulge sensitive information without fully realizing the implications. With chat logs being stored and potentially used for profit, there is growing concern about how much personal data AI companies will collect and how it may be exploited in the future.
What else was important:
Microsoft: Microsoft has introduced the second wave of AI features for its 365 Copilot, enhancing productivity tools like Word, Excel, and Outlook. The new features aim to streamline workflows by offering more advanced AI-powered capabilities, such as automatic data analysis in Excel, content generation in Word, and improved email management in Outlook.
GDPR: The German Federal Data Protection Commissioner is considering banning the use of TikTok on government devices, including those of Chancellor Olaf Scholz, due to concerns over data privacy. The commissioner has also suggested a potential ban on using ChatGPT on official devices, citing similar privacy and data protection issues.
Meta: Meta has confirmed that it is using publicly available data from Facebook and Instagram, including photos and posts, to train its AI models. This includes content shared on its platforms, sparking concerns about user privacy and how personal data is being utilized to enhance Meta’s AI capabilities. The move highlights the ongoing debate over data usage and transparency in AI development.