Newsletter
NENNA.AI monthly newsletter - August
30.08.2024
TODAY'S MENU
Microsoft Service Agreement
Jobs @ nenna
California's AI bill
Nenna: IP Protection
Data Security: Copilot
What else was important
Microsoft Updates Service Agreement in September
Summary: Microsoft has updated its Services Agreement, warning users that content from its AI-powered services and other offerings may be used to improve its own products. This update has sparked concerns about how Microsoft will handle user data, particularly with AI tools that could potentially utilize customer content without explicit consent.
Why is it important: This update underscores the need for users and organizations to closely monitor how their data is being used by service providers, especially in the context of AI. As Microsoft reserves the right to use AI-generated content, there is growing concern about privacy and the potential for customer data to be leveraged in ways that may not align with users’ expectations. Ensuring transparency and robust data governance is crucial to maintaining trust and safeguarding sensitive information.
We are currently looking for awesome people to join us!
Jobs:
Software Engineer (m/f/d) —> here
Content Marketing Manager (m/f/d) —> here
Student Assistant – Marketing (m/f/d) —> here
Please, if you know someone who is looking for a job with purpose and wants to help push a young startup forward, reach out to jobs@nenna.ai.
California weakens AI Safety Bill
Summary: California has diluted a proposed bill designed to prevent AI-related disasters, taking advice from AI safety company Anthropic just before the final vote. The adjustments to the bill reduce its scope, potentially limiting the effectiveness of the proposed regulations intended to mitigate risks associated with advanced AI systems.
Details:
Scope Reduction: The bill’s original provisions, which aimed to impose stringent safety measures on AI developers, have been significantly weakened, narrowing the range of AI systems that would be subject to the regulations.
Influence of Anthropic: The changes were made after receiving feedback from Anthropic, a company specializing in AI safety, which suggested that the bill’s original scope was too broad and could stifle innovation.
Legislative Compromise: The revised bill now relies more on voluntary guidelines than on mandatory regulations, raising concerns about its ability to effectively prevent AI disasters.
Why is it important: The weakening of this bill highlights the significant influence Silicon Valley companies, like Anthropic, have on shaping AI legislation. This raises concerns about the ability of regulatory frameworks to adequately protect the public from the risks posed by advanced AI systems.
Using AI in enterprises and protecting Intellectual Property with Nenna
Nenna’s IP protection service safeguards your company’s intellectual property when using AI technologies. Our solution ensures that sensitive data and proprietary information are securely managed, preventing unauthorized access and protecting innovations throughout the AI integration process.
Join us at nenna.ai and be a part of the future of secure AI!! 🙌
Security concerns over Microsoft's Copilot
Summary: A recent analysis presented at the Black Hat security conference reveals significant security vulnerabilities in Microsoft’s Copilot, which could be exploited for phishing attacks and unauthorized data extraction. Copilot’s integration into Microsoft 365 tools may inadvertently expose sensitive information, making it a target for cybercriminals.
Details:
Phishing risks: Copilot’s ability to generate realistic, context-specific content could be misused to create convincing phishing emails, increasing the risk of successful attacks.
Data extraction vulnerabilities: The integration of Copilot with Microsoft 365 allows for the potential extraction of sensitive data from emails, documents, and other files, which could be exploited by malicious actors.
Lack of security safeguards: The report underscores the need for stronger security measures and user awareness to mitigate the risks associated with the use of AI tools like Copilot in business environments.
Why is it important: As AI tools become more integrated into everyday workflows, ensuring robust security mechanisms is critical to safeguarding sensitive information from potential exploitation.
What else was important
Zoom: Will update its terms of service on 09/20/2024 to allow the company to use customer data, including video content, for AI training purposes without explicit user consent. This setting is also enabled by default.
Google: Google has lost a major antitrust case in the U.S. over its dominance in search, marking a significant legal setback for the tech giant. The ruling could lead to substantial changes in how Google operates, potentially forcing the company to alter its business practices to promote competition and reduce its control over the search market.
Shadow AI: A recent survey reveals that while 15% of companies have banned the use of AI tools for coding, 99% of developers continue to use them despite these restrictions. This highlights the growing reliance on AI-assisted coding tools in the software development industry, even in the face of organizational policies prohibiting their use.