Today, in partnership with Google Cloud, we’re beta launching SynthID, a new tool for watermarking and identifying AI-generated images. It’s being released to a limited number of Vertex AI customers using Imagen, one of our latest text-to-image models that uses input text to create photorealistic im...
Why established cyber security principles are still important when developing or implementing machine learning models.
Supply chain security for ML, from the google/model-transparency project on GitHub.
In the wake of WormGPT, a ChatGPT clone trained on malware-focused data, a new generative artificial intelligence hacking tool called FraudGPT has emerged, and at least another one is under development that is allegedly based on Google's AI experiment, Bard.
Well done, congrats!
Awesome, congratulations!
I've heard good things about the AWS Security Specialty certificate too. I've done a course for it which was great, though I never bothered to take the certificate (I don't feel the need for it). Have you considered it?
The Army is exploring the possibility of asking commercial companies to open up the hood of their artificial intelligence algorithms as a means of better understanding what’s in them to reduce risk and cyber vulnerabilities.
Socket is using ChatGPT to examine every npm and PyPI package for security issues.
A very interesting approach. Apparently it generates lots of results: https://twitter.com/feross/status/1672401333893365761?s=20
Researchers use the OpenSSF Scorecard to measure the security of the 50 most popular generative AI large language model projects on GitHub.
They used OpenSSF Scorecard to check the most starred AI projects on GitHub and found that many of them didn't fare well.
The article is based on the report from Rezilion. You can find the report here: https://info.rezilion.com/explaining-the-risk-exploring-the-large-language-models-open-source-security-landscape (any email name works; you'll get access to the report without email verification)
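For the curious, Scorecard can emit machine-readable results (e.g. `scorecard --repo=github.com/owner/repo --format json`), which makes it easy to script the kind of bulk analysis the report did. A minimal sketch of flagging weak checks — note the sample data below is hand-written for illustration, not real Scorecard output:

```python
import json

# Illustrative Scorecard-style result (hand-written sample, not real output).
sample = json.loads("""
{
  "repo": {"name": "github.com/example/llm-project"},
  "score": 4.2,
  "checks": [
    {"name": "Maintained", "score": 10},
    {"name": "Branch-Protection", "score": 0},
    {"name": "Dangerous-Workflow", "score": 10},
    {"name": "Token-Permissions", "score": 2}
  ]
}
""")

def weak_checks(result, threshold=5):
    """Return the names of checks scoring below the given threshold."""
    return [c["name"] for c in result["checks"] if c["score"] < threshold]

print(weak_checks(sample))  # ['Branch-Protection', 'Token-Permissions']
```

Run that across a list of repos and you get a quick triage view of which projects skip basics like branch protection or least-privilege tokens.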
These might not all work as well anymore, but they're still interesting to take a look at.
Scott (Piper)’s AWS Security Maturity Roadmap is the definitive resource for cloud-native companies to build a security program and posture in AWS. It d…
This gives a great overview of when to build, buy, or adopt an open source solution for a few different common cloud security challenges.
The talk can be seen here: https://youtu.be/JCphc30kFSw?t=2140
A Comprehensive Overview of Prompt Engineering
As they mention in the thread, this isn't exactly groundbreaking, but it's still interesting.
Guidance on designing, creating, testing, and procuring secure and privacy-preserving AI systems
Our goal is to facilitate the development of AI-powered cybersecurity capabilities for defenders through grants and other support.
> Strong preference will be given to practical applications of AI in defensive cybersecurity (tools, methods, processes). We will grant in increments of $10,000 USD from a fund of $1M USD, in the form of API credits, direct funding and/or equivalents.
I think this is a great initiative, and I hope we'll see some cool projects that benefit defenders.