The cost of training top models is increasing 2-3X per year, and at this rate will reach $1 billion per model by 2027. The findings are from Epoch AI and follow their recent analysis showing compute for foundation models growing at 4-5X per year. They've also provided some interesting data on the relative breakdown of cost components like staff, hardware, and energy.

A few thoughts after reading this:

- I would assume staff costs won't increase as much as hardware and energy costs, so the relative contribution of human labor to future AI models will decrease, in yet another sign of capital's increasing dominance over labor in the industry (just look at NVIDIA's ever-rising stock price).
- If compute is increasing 4-5X per year but cost is increasing only 2-3X, is it safe to assume that training runs are getting more efficient?
- Some people, like Anthropic CEO Dario Amodei, have said we'll see $1 billion training runs in the next year. Does this mean the 2-3X pace is about to accelerate? That Epoch's estimate is wrong? Or that Amodei is overstating the cost, possibly to put fear into competitors who can't keep pace?
- Only a few companies are going to be able to train future models, so we're likely to see further consolidation. Perhaps we end up with only two large general-purpose models, like the Windows/macOS and Android/iOS duopolies we've seen in the past.

Article here: https://lnkd.in/g5SejXen
Simon Smith’s Post
More Relevant Posts
-
Let's explore how to make the most of Gemma models in your next project. Gemma models are a suite of open models from Google tailored for specific tasks, and they're making waves thanks to their efficiency and versatility.

Take CodeGemma, for example. It's perfect for anyone working on coding projects. Available in 2B and 7B pretrained options, it excels at code completion, helping you work faster with fewer mistakes. Then there's RecurrentGemma, which is all about speed when dealing with long sequences of data. It uses a technique called recurrence to process information quickly and accurately.

Here are some tips I've found helpful:

1. **Identify What You Need**: Choose the right tool for the job. CodeGemma is ideal for coding tasks, while RecurrentGemma shines with long data sequences.
2. **Start with Pretrained Models**: Save time and resources by using variants that have already learned from vast amounts of data (see the sketch below for a quick way to try one).
3. **Optimize Performance**: For those using NVIDIA RTX GPUs, checkpoints optimized with TensorRT-LLM can supercharge CodeGemma's performance without bogging down your system.
4. **Keep Learning**: The field is always advancing, so staying informed about new releases ensures you don't miss out on improvements that could benefit your work.

In essence, Gemma models are here to elevate your AI-related endeavors. They offer solutions whether you're into coding or need efficient data processing tools. The trick is to understand what you need and pick the model that aligns with your goals. Here's to smoother workflows! #AI #MachineLearning #DataScience
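To make tip 2 concrete, here is a minimal sketch of code completion with a pretrained CodeGemma checkpoint via Hugging Face Transformers. The model id, prompt, and generation settings are illustrative choices, not official recommendations; check the model card for license terms and prompt formats.

```python
# Minimal code-completion sketch with a pretrained CodeGemma checkpoint.
# Assumes `transformers`, `accelerate`, and a recent PyTorch are installed,
# and that you have accepted the Gemma license on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/codegemma-2b"  # 2B pretrained variant; a 7B also exists
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding keeps the completion deterministic for a quick smoke test.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```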
-
Training runs of 2e29 FLOP are thought to be feasible by 2030. Primary constraints include power availability, chip manufacturing capacity, data scarcity, and latency. "Overall, these constraints still permit AI labs to scale at 4x/year through this decade, but these present major challenges that will need to be addressed to continue progress."

Specifically on data scarcity - I have said the same about authoritative sources a number of times: "it’s worth noting that copyright restrictions may disproportionately affect high-quality sources such as books and reputable news outlets. These sources often contain well-curated, factual, and professionally edited content, which could be particularly valuable for training. As a result, while the quantity of training data may not be drastically reduced, there could be a notable impact on the quality and diversity of the most authoritative sources available for AI training."

https://lnkd.in/ghWt2SNi
Can AI Scaling Continue Through 2030?
epochai.org
-
Don’t let anyone tell you a giant AI model can’t fit on your laptop.

Why are models so big in the first place?
↳ Trained on massive data: LLMs learn from books, code, and the entire internet.
↳ Weights take up space: a 10B-parameter model uses ~40 GB (each weight is a 32-bit number).
↳ Activations add up: intermediate outputs during inference/training consume memory too.

The fix? Quantization. Quantization shrinks models by reducing the floating-point precision of weights, e.g. 32-bit → 8-bit (precision ~1e⁻⁷ → ~1e⁻⁴).

Why it works:
↳ Smaller memory footprint: a 10B model drops to ~20 GB (16-bit) or ~10 GB (8-bit).
↳ Keeps ~90% accuracy: near-original performance with most methods.
↳ Speed boost: smaller files = faster downloads; low-bit math = quicker inference.

How quantization works (simple math! - see the sketch below):
1. Pick a range: for INT8, use -128 to 127.
2. Scale & shift: stretch/squeeze FP32 weights into this range using a scale factor and zero point.
3. Dequantize if needed: temporarily revert to FP32 for precise calculations during training steps.

Inference vs. training: when and how to quantize
1. Inference (optimize post-training)
↳ Post-Training Quantization: convert trained models to lower precision (FP32 → INT8) in 1-5 lines of code. Trade-off: ~1-5% accuracy loss.
↳ Mixed precision: dynamically assign bit-widths (Apple's 2/4-bit blends, NVIDIA's 4-bit GPUs) for edge efficiency.
2. Training (build robust models)
↳ Quantization-Aware Training: simulate low precision during training to minimize deployment drops. Trade-off: adds 20-30% training time.
↳ Full low-precision training: train entirely in 8-bit.

Quantization flavours: key techniques
1. Asymmetric vs. symmetric
↳ Asymmetric: uses a "zero point" to align ranges perfectly (no wasted space).
↳ Symmetric: simpler, but struggles if values skew positive or negative.
2. Per-channel/group scaling
↳ Assign unique scales to layers/blocks to handle outliers.
3. Top methods
↳ LLM.INT8: isolate outliers in 16-bit; quantize the rest to 8-bit.
↳ SmoothQuant: balance activation spikes before quantizing.
↳ QLoRA: 4-bit base weights + tiny LoRA layers for fine-tuning.
↳ BitNet b1.58: Microsoft's 1.58-bit/weight breakthrough (matches 16-bit Llama 2!).

Why you'll love it:
↳ Run 70B models on a single GPU.
↳ 2-4x faster inference (low-bit math = quicker responses).
↳ Cut cloud costs by 80%+ (smaller models = fewer GPUs).

Get started in 4 steps:
↳ Start with 8-bit (safe) → experiment with 4-bit later.
↳ Use libraries: Hugging Face Transformers, PyTorch, TensorFlow Lite, Quanto, Axolotl.
↳ Calibrate: some methods (like GPTQ) need a tiny dataset to optimize scaling.
↳ Tweak: try mixed precision or advanced methods (SmoothQuant) if accuracy drops.

Final take: quantization isn't a compromise. Shrink models 2-8x, speed up responses, and democratize AI, all while keeping ~90% accuracy. Your laptop (or budget) just became LLM-ready.

Which quantization method have you tried? Let's talk in the comments!
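Here is the "simple math" as a toy NumPy sketch of asymmetric INT8 quantization: pick a range, scale and shift with a zero point, then dequantize. The tensor and names are made up purely for illustration.

```python
# Toy asymmetric INT8 quantization (scale + zero point), NumPy only.
# Values are random; this just demonstrates the round trip and its error.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map FP32 weights onto the INT8 range [-128, 127]."""
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min) / 255.0              # step 1: pick range, get scale
    zero_point = np.round(-128 - w_min / scale)  # step 2: shift so w_min -> -128
    q = np.clip(np.round(w / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Step 3: revert to FP32 when higher precision is needed."""
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(5).astype(np.float32)
q, scale, zp = quantize_int8(w)
w_hat = dequantize(q, scale, zp)
print("original:  ", w)
print("quantized: ", q)
print("round trip:", w_hat)                      # close to original values
print("max error: ", np.abs(w - w_hat).max())    # small, but nonzero
```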
-
🚀 With Microsoft's Phi-3 recently out, I wanted to share the highlights from its technical paper:

1. Phi-3-mini model: a compact, 3.8 billion parameter model trained on 3.3 trillion tokens, deployable on mobile phones.
2. Performance: comparable to larger models like Mixtral 8x7B and GPT-3.5, achieving 69% on MMLU and 8.38 on MT-bench.
3. Training data: a mix of heavily filtered web data and synthetic data, enhancing smaller-model performance.
4. Additional models: introduces phi-3-small (7B parameters) and phi-3-medium (14B parameters) with even better performance metrics.
5. Architecture and compatibility: Phi-3-mini uses a transformer decoder architecture, compatible with Llama-2 packages.
6. Quantization and deployment: the model can be quantized to 4 bits, fitting in 1.8 GB of memory, and was tested on an iPhone 14 (a hedged loading sketch follows below).
7. Training methodology: focuses on quality of data rather than quantity, in line with 'Textbooks Are All You Need'.
8. Post-training enhancements: includes supervised finetuning and direct preference optimization for various improvements.
9. Academic benchmarks: detailed performance comparisons on benchmarks like MMLU, HellaSwag, and others.
10. Safety measures: developed under Microsoft's RAI principles, undergoing extensive testing and evaluations for safety.

This approach underlines a shift toward quality-centric training in AI development, allowing powerful capabilities on constrained devices. #AI #GenerativeAI #Tech
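For point 6, here is a hedged sketch of one common way to load the public Phi-3-mini checkpoint in 4-bit with Hugging Face Transformers and bitsandbytes. This is not the paper's on-device iPhone deployment (Microsoft used a separate optimized path for that); it is just a quick single-GPU way to see the memory savings for yourself.

```python
# 4-bit loading sketch for Phi-3-mini via bitsandbytes.
# Requires `transformers`, `accelerate`, `bitsandbytes`, and a CUDA GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "microsoft/Phi-3-mini-4k-instruct"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit weights, as in the paper
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for stability
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain why small models matter."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=80)[0]))
```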
-
I have gone through the first half of the Gemma 2 technical report and was impressed with the new benchmarks in model efficiency, performance, and training methodologies. Summarizing the key points from the report for your quick reference.

Built on a foundation of extensive training data and sophisticated infrastructure, Gemma 2 offers strong quality and speed for both intensive computations and streamlined deployments across various hardware environments. 💡✨

🔧 𝗧𝗲𝗰𝗵𝗻𝗶𝗰𝗮𝗹 𝗗𝗲𝘁𝗮𝗶𝗹𝘀
𝗠𝗼𝗱𝗲𝗹 𝗦𝗶𝘇𝗲𝘀:
- 9 billion parameters (base and instruction-tuned versions)
- 27 billion parameters (base and instruction-tuned versions)

📊 𝗧𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝗗𝗮𝘁𝗮:
- 27B model: 13 trillion tokens
- 9B model: 8 trillion tokens

📏 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗟𝗲𝗻𝗴𝘁𝗵: both models have a context length of 8192 tokens.

🖥️ 𝗧𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝗜𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲:
- 27B model: trained on Google Cloud TPU v5p
- 9B model: trained on TPU v4

⚙️ 𝗧𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝗙𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸: JAX and ML Pathways.

🌟 𝗞𝗲𝘆 𝗘𝗻𝗵𝗮𝗻𝗰𝗲𝗺𝗲𝗻𝘁𝘀
🔄 𝗦𝗹𝗶𝗱𝗶𝗻𝗴 𝗪𝗶𝗻𝗱𝗼𝘄 𝗔𝘁𝘁𝗲𝗻𝘁𝗶𝗼𝗻: combines sliding-window and full-quadratic attention for improved generation quality and efficiency on longer input sequences.
📉 𝗟𝗼𝗴𝗶𝘁 𝗦𝗼𝗳𝘁-𝗖𝗮𝗽𝗽𝗶𝗻𝗴: improves training stability by preventing logits (the raw, unnormalized scores a model outputs) from growing excessively, scaling them into a fixed range (see the small sketch below).
📚 𝗞𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝗗𝗶𝘀𝘁𝗶𝗹𝗹𝗮𝘁𝗶𝗼𝗻: employs a larger teacher model to train the smaller 9B model, enhancing its performance.
🧩 𝗠𝗼𝗱𝗲𝗹 𝗠𝗲𝗿𝗴𝗶𝗻𝗴: integrates multiple large language models (LLMs) into a single new model for superior performance.

🚀 𝗣𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝗻𝗰𝗲:
🔋 𝗘𝗳𝗳𝗶𝗰𝗶𝗲𝗻𝗰𝘆: the 27B model is designed to run efficiently at full precision on a single Google Cloud TPU host, NVIDIA A100 80GB Tensor Core GPU, or NVIDIA H100 Tensor Core GPU, significantly reducing deployment costs.
⏩ 𝗜𝗻𝗳𝗲𝗿𝗲𝗻𝗰𝗲 𝗦𝗽𝗲𝗲𝗱: optimized for high-speed performance across hardware setups, from gaming laptops to cloud-based environments.
- 27B model: offers competitive performance compared to models more than twice its size.
- 9B model: outperforms other models in its size category, including Llama 3 8B.

With these enhancements and its efficient architecture and fast inference, Gemma 2 is poised to be a formidable contender in the AI domain. Whether deployed on a single Google Cloud TPU or a gaming laptop, Gemma 2 delivers robust performance that scales across diverse applications. 🌟🚀

#Gemma2 #AI #MachineLearning #DeepLearning #NaturalLanguageProcessing #AIResearch #TechInnovation #ModelPerformance #AIEnhancements #TPUv5 #TPUv4 #KnowledgeDistillation #ModelMerging #InferenceSpeed #AIModels
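Logit soft-capping is simple enough to show in a few lines. Here is a small PyTorch sketch of the idea; the cap value below is illustrative (the report fixes different caps for attention and final logits):

```python
# Logit soft-capping: squash logits smoothly into (-cap, +cap) instead of
# letting them grow without bound, which helps training stability.
import torch

def soft_cap(logits: torch.Tensor, cap: float = 30.0) -> torch.Tensor:
    # tanh maps to (-1, 1); multiplying by `cap` bounds the result to
    # (-cap, cap) while staying smooth and differentiable.
    return cap * torch.tanh(logits / cap)

logits = torch.tensor([-200.0, -15.0, 0.0, 15.0, 200.0])
print(soft_cap(logits))  # extremes saturate near +/-30; mid-range barely changes
```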
-
NVIDIA RELEASED OPEN SYNTHETIC DATA GENERATION PIPELINE FOR TRAINING LLMS

NVIDIA announced Nemotron-4 340B, a family of open models developers can use to generate synthetic data for training LLMs for commercial applications across healthcare, finance, manufacturing, retail, and other industries. Robust, high-quality training datasets can be prohibitively expensive and difficult to access; synthetic data mimics the characteristics of real-world data. Through a uniquely permissive open model license, Nemotron-4 340B gives developers a free, scalable way to generate synthetic data that can help build powerful LLMs.

The Nemotron-4 340B family includes base, instruct, and reward models that form a pipeline to generate synthetic data for training and refining LLMs (a hedged sketch of the pattern is below). The models are optimized with NVIDIA NeMo, an open-source framework for end-to-end model training, including data curation, customization, and evaluation. They're also optimized for inference with the open-source NVIDIA TensorRT-LLM library. Nemotron-4 340B can be downloaded from Hugging Face.

https://lnkd.in/dc6Xr7q9

From our AI CTO, Miguel Amigot II
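For intuition only, here is a loudly hedged sketch of the generate-then-filter pattern an instruct/reward pair implies: the instruct model drafts candidates, the reward model scores them, and only the best survive into the training set. The model ids are placeholders, and treating the reward model as a text-classification pipeline is an assumption; the real 340B models need multi-GPU serving via NeMo or TensorRT-LLM, not a laptop-sized pipeline. This is the shape of the loop, not NVIDIA's implementation.

```python
# Generate-then-filter sketch for synthetic data. Model ids are placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="YOUR_INSTRUCT_MODEL")   # hypothetical id
reward = pipeline("text-classification", model="YOUR_REWARD_MODEL")    # hypothetical id

prompt = "Write a short FAQ answer about wire-transfer limits."

# Sample several candidate responses for the same prompt.
candidates = [
    out["generated_text"]
    for out in generator(
        prompt, num_return_sequences=4, max_new_tokens=128, do_sample=True
    )
]

# Score each candidate; keep only the highest-reward one for the dataset.
scored = [(reward(text)[0]["score"], text) for text in candidates]
best_score, best_text = max(scored)
print(best_score, best_text[:200])
```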
-
WARNING: Your Competition Is About to Leave You in the Dust...

NVIDIA just released 5 FREE online courses that will give you the AI skills to DOMINATE your industry... but you must act fast! Here are the top 5 FREE online courses you need to check out:

1. Accelerate Data Science Workflows with Zero Code Changes: explore the benefits of a unified workflow across CPUs and GPUs for data science tasks. https://lnkd.in/ePPP9yqV
2. Generative AI Explained: dive into generative AI concepts, applications, challenges, and opportunities. https://lnkd.in/eHNzFFs2
3. Building Video AI Applications at the Edge on Jetson Nano: learn to set up your Jetson Nano, build DeepStream applications for video analysis, and develop skills for future projects. https://lnkd.in/edqBVK-V
4. An Even Easier Introduction to CUDA: discover how to organize parallel thread execution for massive dataset sizes and manage memory between the CPU and GPU. https://lnkd.in/efxZ85Jk
5. Building RAG Agents with LLMs: master scalable deployment strategies for LLMs and vector databases, microservices, and state-of-the-art models. https://lnkd.in/er2jyAKt

These courses are tailored to help you stay ahead of the curve in the AI game. Plus, they're FREE! Dive in and start learning today!

Image credit: Findtek
-
Alternatives to standard training methods for LLMs will change the game.

I won't even write about why training large networks is important - there are about 4-5 posts in everyone's feed making that point. However, training modern networks comes with nuances. It cost $5 million (3,640 petaflop/s-days) to train GPT-3 with 175 billion parameters. It cost over $100 million to train GPT-4 with more than 1.76 trillion parameters. Rumors are that GPT-5 will have 10-20 trillion parameters. We don't know how much money OpenAI will spend on it, but I think it will be more than $2 billion.

Currently, $20 trillion of money supply is circulating in the world. If everything continues at the same pace, companies might collectively spend over $100 billion on training this year. That's about 0.5 percent of the world's circulating money supply (!!). And this is just the beginning.

The industry might change significantly with the emergence and spread of new classes of models. Take Intel as an example. In 2019, Intel's revenue was $72 billion and NVIDIA's was $12 billion. In 2023, Intel's revenue was $54 billion and NVIDIA's was $27 billion. Why is this happening? Because the world is changing fast, there is now huge demand for GPUs and TPUs, and NVIDIA was able to adapt faster - and to start building its own AI solutions on top. The same could eventually happen to NVIDIA: some company may overtake it, though NVIDIA iterates quickly and creates alternative technologies. It's funny that Intel also outpaced many competitors in the 90s, including Motorola. History repeats itself.

The problem with the current approach is that weight adjustment involves matrix multiplication, and the number of calculations grows quadratically as parameters increase (in the dense-model case). A lot of resources now go into optimizations, but the trend remains: next-generation networks mainly get smarter by increasing parameters tenfold, which leads to a roughly twentyfold increase in the money needed for training.

What are the alternatives? Hinton, who wrote the famous 1986 article on backprop (have you seen another article with 16k citations?), actively criticizes his own method in 2024. He proposes capsule networks, which I even analyzed here before. They don't work yet, but they're very interesting.

Globally, innovations can come at several levels:
- Algorithmic: innovations in the training method itself, like capsules.
- Hardware: new ways of computing, for example BrainChip, with its neuromorphic processor Akida that imitates the human brain and can run networks on-device.
- Services: companies that help manage models, like Bright Computing, which NVIDIA bought in 2022.
- Physical: I recently met a professor of theoretical physics from Cambridge who is developing a very interesting method that restructures training at the level of physical processes.

Like this post if you want to know more about the alternatives!
-
🚀 **Navigating the New Frontier: AI in Education** 🎓

Hey LinkedIn friends! 🌟 Ever wondered about the fine line between using AI as a helpful study buddy and crossing into the realm of academic dishonesty? 🤔

**📚 Check out FemLogic AI's latest blog post!** We dive into a thought-provoking case where a high school student faced severe penalties for using AI to complete an assignment. This sparked a legal battle that highlights crucial questions about the role of technology in education.

In our article, you'll discover:
✅ The incident that set off this legal firestorm
✅ Parents' brave stand against ambiguous school policies
✅ Key implications for students, educators, and policymakers nationwide

Whether you're an educator, a student, or someone interested in the future of learning, this case is a wake-up call to rethink our policies surrounding AI's use in schools. 🤖✨

We discuss:
- **Understanding AI's Role:** What constitutes assistance vs. cheating?
- **Policy Development:** Why clarity and communication on AI usage are essential
- **Educational Benefits of AI:** Exploring how AI can enhance creativity and learning

Join us as we navigate this pivotal crossroads in education. Let's embrace innovation while ensuring academic integrity!

👉 [Read the full article here!](#)

Your thoughts and comments are always welcome! Let's start a dialogue and shape the future of education together. 💬❤️

#AIinEducation #FemLogicAI #EducationalPolicy #Innovation #Learning #AcademicIntegrity #LegalIssues #FutureOfEducation
Student Penalized for AI Use Leads to Legal Battle
femlogic-ai.blogspot.com
-
**Enterprises, Educational Institutes & Governments are paying 💵 1000's of dollars to learn: how to embark on an AI 💡 journey.**

**Here's a guide to essential hardware for your AI projects 🚀**

The world of AI demands powerful hardware to support heavy computation, whether you're training machine learning models or building neural networks. Here's a structured breakdown of the best hardware for AI, along with brands, specifications, estimated costs, and availability to help you get started.

🔧 **Graphics Processing Units (GPUs)**
GPUs are at the core of AI projects, ideal for parallel processing tasks, making them essential for machine learning and deep learning.

NVIDIA GeForce RTX 4090
🔹 Specs: 24GB GDDR6X, 16,384 CUDA cores
🔹 Estimated cost: $1,600
🔹 Availability: in stock at major retailers like Amazon and Newegg.

NVIDIA A100 Tensor Core GPU
🔹 Specs: 40GB/80GB HBM2 memory, designed for AI workloads
🔹 Estimated cost: $10,000
🔹 Availability: typically available through enterprise suppliers or NVIDIA's website.

🔧 **Field Programmable Gate Arrays (FPGAs)**
For specialized, customizable hardware suited to AI inference, FPGAs are an excellent choice. [Consult for more details]

💾 **Storage Solutions**
Fast and reliable storage is essential for handling large datasets in AI projects.

Samsung 970 EVO Plus SSD (1TB)
🔹 Specs: NVMe M.2 interface, read speeds up to 3,500 MB/s
🔹 Estimated cost: $100
🔹 Availability: readily available at electronics retailers.

☁️ **Cloud Solutions (for Flexibility)**
If investing in physical hardware is daunting, consider cloud-based solutions for flexibility and scalability.

AWS EC2 with NVIDIA GPUs
🔹 Estimated cost: starting at $0.90/hour for T4 GPUs
🔹 Availability: immediate access upon account setup.

💡 Budget-Friendly GPU Options 🛠️

Whether you're an AI enthusiast or a business looking to integrate AI, investing in the right hardware will elevate your projects to new heights!

**Let's connect if you have questions or need personalized guidance on your AI setup!** Book a consultation - https://lnkd.in/gZz-PMac

#AI #MachineLearning #DeepLearning #GPUs #AIHardware #Tech #NVIDIAGPUs #CPUs #CloudAI #Innovation #DigitalTransformation #AIJourney