AI and ML in 2023 have experienced a resurgence that has reached even our non-tech friends and family. Some articles even suggest LLMs will be as revolutionary to how we work and live as the iPhone. I don’t know; we’ll see when the newness wears off. Given how ubiquitous ML has become in recent months, the book reviewed below is an excellent introduction to the emerging risks, with a takeaway for every type of reader.
Several months ago, I read NOT WITH A BUG BUT WITH A STICKER by Siva Kumar and Anderson. It sat in my Amazon cart on pre-order for many months, and it turned out to be a timely release, with renewed excitement for ML in security stemming from the recent LLM craze. This is not a step-by-step guide to hacking ML systems; it sits a layer above, detailing the risks of ML systems, the threats against them, and the associated struggles with trusting and securing them.
The book takes the reader on a highly approachable exploration of the potential security risks of ML systems, along with fixes and mitigations, governmental regulations, and much more. I enjoyed the real-world stories and references strategically placed throughout the chapters; they provide a dose of reality and a path to dive deeper into the ideas. The reader walks away with an appreciation for the nascent state of trust, safety, and security in current ML systems.
A technical reader will receive a foundation to build upon. The book discusses many ML security topics, arming the reader with the knowledge and direction to dive into more detailed materials like research papers from Goodfellow, Carlini, and others.
An executive reader walks away with a broad view of the difficulties of securing ML systems. The book introduces the concepts and provides real-world examples that make the risks concrete.
Users of ML systems are introduced to the ways their tools might fail. Understanding these failures empowers users to exercise better caution.
Overall, it’s a fast read at 183 pages, and in my opinion, time well spent.