Pravjit Tiwana
Greater Seattle Area
6K followers
500+ connections
About
C-Level technology executive with a multi-domain background (cloud infrastructure…
Experience
Other similar profiles
- Bob Bieger · Minneapolis, MN
- Anthony Victor Lorenzo · Software Engineer · Sebring, FL
- Anton Semenovsky · San Francisco, CA
- Siva Padisetty · Bellevue, WA
- Steven Scott · Moorpark, CA
- Brent Wood · Greater Seattle Area
- Roger Barga · Seattle, WA
- Vijay Rayapati · Bengaluru
- Daniel Hernandez · Austin, TX
- Paul Dakessian · Greater Seattle Area
- Claude Jones · Encinitas, CA
- Kumar Chellapilla · Mountain View, CA
- Sharad Bajaj · Greater Seattle Area
- Vikas Dhaka · San Jose, CA
- Eric Musser · Peachtree Corners, GA
- Hai Huang · Bellevue, WA
- James Aslan · Freelance IT and Business Operations Consultant · Bayonne, NJ
- Andrew Fair · Greater Indianapolis
Explore more posts
-
Jignesh Shah
Amazon RDS Multi-AZ deployment with two readable standbys supports 6 additional AWS Regions. The three-node Multi-AZ DB cluster is now available in the Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Europe (Spain), Europe (Zurich), Middle East (UAE), and Israel (Tel Aviv) Regions. This deployment option supports minor version upgrades and system maintenance updates with typically less than one second of downtime when using Amazon RDS Proxy or any one of the open-source AWS Advanced JDBC Driver, PgBouncer, or ProxySQL. https://lnkd.in/g9D9RGVB #Amazon #RDS #PostgreSQL #MySQL
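The sub-second switchover described above can still surface as one or two refused connections for clients that are not behind RDS Proxy or a failover-aware driver; a short retry loop is usually enough to ride it out. A minimal sketch in plain Python, with the `connect` callable and the timings invented for illustration (they are not part of any AWS SDK):

```python
import time

def connect_with_retry(connect, attempts=5, delay=0.2):
    """Retry a connection attempt to ride out a sub-second failover.

    `connect` is any zero-argument callable that returns a connection
    object, or raises ConnectionError while the standby is promoted.
    """
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)  # total budget ~1s, matching the switchover window

# Simulated endpoint: refuses twice (mid-failover), then accepts.
state = {"calls": 0}
def fake_connect():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("server closed the connection unexpectedly")
    return "connection"

print(connect_with_retry(fake_connect))
```

In practice the same pattern is what RDS Proxy and the failover-aware drivers implement for you, which is why the post singles them out.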
90 reactions · 1 comment
-
Carmella (Surdyk) Weatherill
Newly published in Google Cloud's Architecture Framework 🏗️: this new AI/ML perspective provides a roadmap for designing, building, and managing workloads on Google Cloud. 🤖 It covers best practices for achieving operational excellence, security, reliability, cost optimization, and performance optimization. 🎯

Who should read this? 🤔 Decision-makers 👔, architects 📐, administrators 💻, developers 🧑‍💻, and operators 🛠️

Key takeaways:
- Learn how to design and build AI/ML workloads that meet your business goals. 💼
- Understand the principles of operational excellence, security, reliability, cost optimization, and performance optimization in the context of AI/ML. 💡
- Get practical recommendations for implementing these principles on Google Cloud. ☁️

#AI #ML #GoogleCloud #CloudArchitecture #OperationalExcellence #Security #Reliability #CostOptimization #PerformanceOptimization
-
Raghu Ramakrishnan
Data marts, warehouses, lakes: whatever the name, the pattern of consolidating data across an enterprise for comprehensive analytics is a familiar one. Every enterprise has many data sources (some operational, some analytic, some telemetry), and bringing them together has always been a challenge. Cleaning the data, curating it for consistency, exploring it, doing data science, building reports, and monitoring pose another series of challenges when bringing a range of tools to bear on the data, thanks to diverse data formats and tooling.

With Gen AI, both the requirements (e.g., vector indexing for RAG) and the potential value of the data have gone up. And the pressure to ensure right use of the data, to comply with a broad range of regulations, and to secure the data has never been higher. We believe the time has come to come together around open data standards, to ensure that data is easy to consolidate and easy to plug into a wide variety of tools, all in a properly governed way.

Josh and I describe the approach we are taking at Microsoft with Fabric and Purview. We are building on open Parquet-based updatable table formats such as Delta, Iceberg, and Hudi, with interop across them via XTable. We are making it easy to consolidate data with mechanisms like mirroring and shortcuts. Ultimately, users should be able to use their tool of choice for a given task with the least friction.
247 reactions · 11 comments
-
Carmella (Surdyk) Weatherill
The latest episode of Google Distributed Cloud’s “AI anywhere” series is now live, and it's a must-watch for anyone interested in the future of manufacturing! In this video, Vicky Leung, Cloud Space Program Manager, takes us inside a Cloud Space demo of a modern factory to showcase how Google Distributed Cloud is addressing some of the biggest challenges in the industry.

Key takeaways from the video:
- Real-world AI applications. See how AI-powered cameras and Google Distributed Cloud are being used to ensure worker safety and optimize processes.
- Overcoming deployment challenges. Learn how Google Distributed Cloud makes it easier to deploy AI in factories, even with limited internet connectivity and massive amounts of data.
- The future of AI in manufacturing. Get a glimpse into how AI is transforming the factory floor and creating a safer, more efficient work environment.

Watch the video now and visit our website to learn more about Google Distributed Cloud → https://goo.gle/4aUy5sM
-
Mahesh Agrawal
Want to learn more about how Oracle is helping bring AI to its customers? In this episode, host Karissa Breen sits down with Oracle's Ashish Ray, Vice President of Product Management, Oracle Database, and Sunil Wahi, Vice President, Fusion Cloud Applications, for more perspective on how Oracle is helping its customers solve business challenges, as well as the potential to tap into AI.
-
Andrew Warfield
Sometimes small changes are a big deal. S3's per-account bucket limit has meant that buckets have been more of a human provisioning thing, starkly different from objects, which tend to be interacted with primarily from code. For years, customers have been asking us to increase this limit, and in the storage innovation talk at re:Invent this year I told a bit of a story about why it was a more complicated piece of work than you might expect.

With this limit increase, buckets have become a much more programmatic construct. We hear customers talking about using buckets, with these limit increases, as a way of managing data sharing with their own customers, and using them to more effectively isolate data for specific jobs and teams. If larger numbers of buckets open up a new pattern for you, I'd love to hear more about it.

Launches like these (object consistency, S3 conditionals, and big bucket limit increases) have a tendency to fly a little below the radar. But they are the ones I actually hear the most excitement about from developers, because they really broaden S3 as a programmatic interface to your data, and a building block for really incredible systems. I'm so impressed with the hard, careful work by the team that went into getting this one out. https://lnkd.in/gA2cWATM
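The S3 conditionals mentioned above let a PUT succeed only when the key does not already exist (an HTTP `If-None-Match: *` precondition; in boto3 this is `put_object(..., IfNoneMatch='*')`, which fails with a 412 PreconditionFailed error when the key is present). That turns a bucket into a simple coordination primitive. A minimal sketch of the semantics, simulated in plain Python rather than run against the real API, with invented key names:

```python
class FakeBucket:
    """In-memory stand-in for one S3 bucket, modeling If-None-Match: * PUTs."""

    def __init__(self):
        self.objects = {}

    def put_if_absent(self, key, body):
        """Succeed only when `key` does not exist yet (a conditional write).

        Real S3 signals the losing writer with HTTP 412 instead of False.
        """
        if key in self.objects:
            return False
        self.objects[key] = body
        return True

bucket = FakeBucket()
# Two workers race to claim the same lock object; exactly one wins.
first = bucket.put_if_absent("jobs/job-42.lock", b"worker-a")
second = bucket.put_if_absent("jobs/job-42.lock", b"worker-b")
print(first, second)
```

Combined with the higher bucket limits, the same pattern extends naturally to per-job or per-team buckets created from code.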
245 reactions · 8 comments
-
Byron Cook
Folks I've spoken to about Amazon have likely heard me talk about the importance of the PR/FAQ process, a.k.a. "the working backwards process". This blog post gives a very nice example of a PR/FAQ: https://lnkd.in/edTeuVRQ. Note that the blog doesn't capture the presumably hundreds of reviews, discussions, and rewrites that this doc represents, which is a big part of the real value. When the process is done it all looks so simple, but getting a large organization to decide on something simple and do it in a BIG way is the challenge, and the fun, of it all.
64 reactions · 3 comments
-
Lionel Touati
BigQuery continuous queries enable real-time data processing directly within the data warehouse, while Confluent Cloud efficiently captures and delivers these changes to downstream applications—even when data enrichment or transformation is needed in flight. This integration creates a seamless data pipeline for real-time analytics, fraud detection, and other time-sensitive use cases.
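The continuous query in this pipeline is ordinary SQL run in BigQuery's continuous mode, typically ending in an `EXPORT DATA` clause that pushes each newly arriving row to a Pub/Sub topic, which Confluent Cloud can then source from. A hedged sketch that only assembles such a statement; the project, topic, and table names are placeholders, and the exact `OPTIONS` should be checked against current BigQuery documentation:

```python
def continuous_export_sql(project: str, topic: str, table: str) -> str:
    """Build a continuous-query statement that streams new rows from
    `table` to a Pub/Sub topic. All identifiers here are placeholders.
    """
    return f"""
EXPORT DATA OPTIONS (
  format = 'CLOUD_PUBSUB',
  uri = 'https://pubsub.googleapis.com/projects/{project}/topics/{topic}'
) AS (
  -- Enrichment/transformation happens in flight, inside the SELECT.
  SELECT TO_JSON_STRING(t) AS message
  FROM `{table}` AS t
);
""".strip()

sql = continuous_export_sql("my-project", "txn-events",
                            "my-project.sales.transactions")
print(sql)
```

The statement would be submitted as a BigQuery job with continuous mode enabled; from there, Confluent's Pub/Sub source connector hands the change stream to downstream applications.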
-
Samir Karande
Amazon Web Services (AWS) rolled out v1.0 of its cost data exports compliant with the FinOps Foundation FOCUS specification. zOpt.ai is a contributor to the #FinOps #FOCUS specification, and I am glad to see the progress and adoption of the standard by all the major cloud providers. This is a major step toward understanding cloud spend across providers in a uniform manner.
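Because FOCUS normalizes column names (e.g. `ProviderName`, `ServiceName`, `BilledCost`) across providers, a multi-cloud roll-up becomes a plain group-by. A minimal sketch over synthetic rows; the column names follow the FOCUS 1.0 specification, but the providers, services, and amounts are invented:

```python
from collections import defaultdict

# Synthetic FOCUS-shaped rows; in practice these come from each
# provider's FOCUS-compliant cost export (CSV or Parquet).
rows = [
    {"ProviderName": "AWS",          "ServiceName": "Amazon EC2", "BilledCost": 980.00},
    {"ProviderName": "AWS",          "ServiceName": "Amazon S3",  "BilledCost": 120.50},
    {"ProviderName": "Google Cloud", "ServiceName": "BigQuery",   "BilledCost": 310.25},
    {"ProviderName": "Google Cloud", "ServiceName": "BigQuery",   "BilledCost": 89.75},
]

# Uniform column names mean one aggregation works for every provider.
spend = defaultdict(float)
for row in rows:
    spend[(row["ProviderName"], row["ServiceName"])] += row["BilledCost"]

for (provider, service), cost in sorted(spend.items()):
    print(f"{provider:13s} {service:11s} {cost:10.2f}")
```

Before FOCUS, the same roll-up required per-provider column mappings (each billing export spelled cost, service, and period differently), which is exactly the friction the standard removes.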
23 reactions
-
Justin Brodley
New Episode: 268: Long Time Show Host is CloudPod’s first Casualty to AI (For This Week, at Least) https://bit.ly/4bURA59 Episode 268 of The Cloud Pod podcast, bringing you all the latest news from Cloud and AI with AWS, Azure, Google, OCI, OpenSSH, CloudFlare, and Lambda #aws #gcp #azure #cloud #cloudpodcast #podcast
5 reactions · 1 comment
-
Justin Brodley
New Episode: 260: Amazon Dispatches AWS CEO Adam Selipsky with Prime 2-day delivery Episode 260 of The Cloud Pod podcast, bringing you all the latest news from Cloud and AI including AWS, Azure, Google, OCI, GPT-4.o, C7i-flex, AWS CEO Changes and Amazon Polly #aws #gcp #azure #cloud #cloudpodcast #podcast
18 reactions
-
Werner Vogels
A core tenet for #AWS has always been that, if it was possible to take care of the scaling, resilience, availability, security, and cost management of our customers' infrastructure components, we should do so. Eventually this became known as "serverless", which encompasses much more than executing code without the need for servers. Amazon SQS and S3 were the first AWS services launched, and they were "serverless" from day one.

A good example of this philosophy in practice is Aurora Serverless, which provides on-demand auto-scaling for Amazon Aurora databases. It proactively scales up resources during peak periods to ensure predictable performance when you require it. When demand subsides, it seamlessly scales back down, reducing waste. And with granular, second-by-second billing, you can truly optimize for cost. Remember, the essence of frugal architecture is robust monitoring driving the ability to optimize costs.

To dive deeper into how Aurora Serverless exemplifies this principle, I recommend reading the latest paper here: https://lnkd.in/e4uPK6wv #AWS #serverless #databases
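The point about second-by-second billing is easy to quantify: cost tracks actual load instead of peak provisioning. A small illustrative calculation in plain Python; the ACU-hour price and the workload shape are placeholders, not quoted AWS rates:

```python
ACU_HOUR_PRICE = 0.12  # placeholder $/ACU-hour; check current Aurora pricing

def serverless_cost(usage, price_per_acu_hour=ACU_HOUR_PRICE):
    """Cost of a workload billed per second at whatever capacity it used.

    `usage` is a list of (acus, seconds) segments, e.g. a spike then a lull.
    """
    acu_seconds = sum(acus * seconds for acus, seconds in usage)
    return acu_seconds / 3600 * price_per_acu_hour

# One hour: a 10-minute spike at 8 ACUs, the remaining 50 minutes at 0.5 ACUs.
scaled = serverless_cost([(8, 600), (0.5, 3000)])
# The same hour provisioned for peak capacity the whole time.
peak = serverless_cost([(8, 3600)])
print(f"scale-to-load: ${scaled:.4f}  provisioned-for-peak: ${peak:.4f}")
```

For this invented shape the scale-to-load hour costs less than a quarter of the peak-provisioned one, which is the "reducing waste" half of the post; the monitoring half is what tells you the workload's real shape in the first place.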
1,024 reactions · 16 comments
-
Ravi Murthy
Citius! Loyal Guru got 2x faster with AlloyDB for PostgreSQL.

"Prior to adopting AlloyDB, large traffic spikes from major retail events on Loyal Guru often created bottlenecks. We struggled to deal with data growth, which strained the limits of our storage capacity. AlloyDB's autopilot feature for managing storage has been instrumental in mitigating these issues, automatically adjusting storage capacity to meet our demands without manual intervention. Overall, we reduced query latency by 40-50% and optimized storage from 2TB to 1.3TB — a 35% improvement."

#alloydb #gcp #postgresql
65 reactions