Pretty cool to see two AI agents recognize each other and switch to "data-over-sound" for more efficient communication. Reminds me of the good old days of signing on to AOL with a dial-up modem 😆

Credit: Anton Pidkuiko

The project "Gibberlink" (https://lnkd.in/dpgKzbNK) demonstrated two AI agents starting a normal phone call about a hotel booking, discovering that they are both AI, and deciding to switch from spoken English to a more efficient data-over-sound protocol, ggwave.

Why? The protocol is much cheaper: there is no need for a GPU to synthesize and recognize speech or to track dialogue pauses and interruptions; a simple CPU process handles it all with far less power. It is also faster and more error-resistant than spoken English.
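For anyone curious what the switch looks like in practice, here is a minimal sketch using the ggwave Python bindings (pip install ggwave, plus pyaudio for playback). This is just an illustration of encoding a text payload as an audio waveform, not the Gibberlink implementation itself, and the message string is made up:

```python
# Minimal data-over-sound sketch with the ggwave Python bindings (assumed: pip install ggwave pyaudio).
# Illustration only -- not the Gibberlink code.
import ggwave
import pyaudio

MESSAGE = "Requesting a double room, 2 nights, late check-in."  # hypothetical payload

# Encode the text payload into a float32 audio waveform (chirps instead of synthesized speech).
waveform = ggwave.encode(MESSAGE)

# Play the waveform through the default audio output.
p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paFloat32, channels=1, rate=48000, output=True)
stream.write(waveform, len(waveform) // 4)  # 4 bytes per float32 sample
stream.stop_stream()
stream.close()
p.terminate()

# On the receiving side, the other agent would capture microphone frames and feed them to
# ggwave.decode() on a ggwave.init() instance until the payload is recovered.
```

No speech synthesis, no speech recognition, no turn-taking logic; just a CPU encoding bytes into tones and decoding them back.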
So that’s what R2-D2 is saying