Attacking LLM – Prompt Injection

How will the easy access to powerful APIs like GPT-4 affect the future of IT security? Keep in mind LLMs are new to this world and things will change fast. But I don't want to fall behind, so let's start exploring some thoughts on the security of LLMs. 

Get my font (advertisement): https://shop.liveoverflow.com

Building the Everything API: https://www.youtube.com/watch?v=M2uH6HnodlM

Injections Explained with Burgers: https://www.youtube.com/watch?v=WWJTsKaJT_g

Chapters:
00:00 - Intro
00:41 - The OpenAI API
01:20 - Injection Attacks
02:09 - Prevent Injections with Escaping
03:14 - How do Injections Affect LLMs?
06:02 - How LLMs like ChatGPT  work
10:24 - Looking Inside LLMs
11:25 - Prevent Injections in LLMs?
12:43 - LiveOverfont ad

=[ ❤️ Support ]=

→ per Video: https://www.patreon.com/join/liveoverflow
→ per Month: https://www.youtube.com/channel/UClcE-kVhqyiHCcjYwcpfj9w/join

2nd Channel: https://www.youtube.com/LiveUnderflow

=[ 🐕 Social ]=

→ Twitter: https://twitter.com/LiveOverflow/
→ Streaming: https://twitch.tvLiveOverflow/
→ TikTok: https://www.tiktok.com/@liveoverflow_
→ Instagram: https://instagram.com/LiveOverflow/
→ Blog: https://liveoverflow.com/
→ Subreddit: https://www.reddit.com/r/LiveOverflow/
→ Facebook: https://www.facebook.com/LiveOverflow/

Leave a Reply

Your email address will not be published. Required fields are marked *