
CloudPro

64 Articles
Shreyans from Packt
08 Sep 2025
5 min read

Batch Scoring on Azure ML

5 Knobs That Save You from Nightly Headaches

CloudPro #106

[Sponsored] Hunt Threats, Recover Fast: Next-Gen Cyber Resilience for Google Cloud

Join Hunt Threats, Recover Fast: Next-Gen Cyber Resilience for Google Cloud, a virtual event about going beyond traditional backup. You'll see:

- Real-time ransomware detection and automated threat hunting for Google Cloud
- Turbo Threat Hunting in action to trace attack paths and accelerate incident response
- Streamlined recovery workflows that simplify protecting your Google Cloud workloads

Save Your Spot

Today's CloudPro is about the five batch-scoring knobs most engineers overlook. If you've ever watched a job stretch from minutes to hours and wondered why, this is where you start.

This article is adapted from Chapter 5 of Hands-On MLOps on Azure. In that chapter, author Banibrata De dives into the gritty details of model deployment: batch scoring, real-time services, and the YAML settings that make the difference between smooth pipelines and midnight firefights. (The book goes much further, covering CI/CD pipelines, monitoring, governance, and even LLMOps across Azure, AWS, and GCP. CloudPro readers can grab it at the end of this piece with an exclusive discount.)

Cheers,
Shreyans Singh
Editor-in-Chief

Tuning Batch Jobs on Azure ML: 5 Knobs Every Engineer Should Know

It's late. The batch run you trusted starts crawling. Dashboards spike, Slack pings light up, and you're debating whether to kill the job or ride it out. You don't need a re-platform. You need to tune the controls Azure ML already gives you. Below are the five knobs that tame throughput, flakiness, and costs. They live in your batch deployment YAML, and they work.

1) mini_batch_size: The throttle for your workload

Batch jobs in Azure ML process data in chunks. mini_batch_size controls how big each chunk is. Push it too high, and you'll hit memory or I/O bottlenecks; keep it too low, and you'll waste time on overhead. Think of it like loading a truck: too few boxes and you're underutilizing space, too many and you risk breaking the axle. Getting this balance right often cuts hours off long-running jobs.

2) max_concurrency_per_instance: How many cooks in the kitchen

Each compute node can process tasks in parallel, but how many at once depends on its resources. max_concurrency_per_instance is that dial. If you pack too much onto a single node, CPU and memory will thrash, and everything slows down. Start low, then gradually raise it while watching system metrics. The goal is steady throughput, not chaos.

3) instance_count: Scale out, don't just scale up

Even with tuned concurrency, sometimes one node just isn't enough. That's where instance_count comes in. It decides how many nodes you'll spread the workload across. It's the knob you turn when you need predictable completion times, for example making sure the nightly run finishes before business hours. More nodes mean more cost, but also fewer late-night surprises.

4) retry_settings: Resilience for the real world

In batch jobs, things fail: a network hiccup, a corrupted file, a transient storage timeout. Without retries, the whole job can collapse because of one small blip. retry_settings lets you say, "Try again a few times before giving up." Set sensible timeouts and retries per mini-batch so small failures don't derail the entire pipeline.

5) error_threshold: Fail smart, not early

What happens if some data records are bad? By default, too many errors can abort the run. With error_threshold, you control how many you'll tolerate. Setting it to -1 tells Azure ML to ignore errors completely. For messy real-world datasets, this is a lifesaver: you can still ship 99% of results and deal with the outliers later, instead of losing the entire batch.

Extra sanity checks

- Respect the contract: Batch jobs are built for files/blobs in, files/blobs out. Don't try to wrap them around per-record HTTP calls.
- Keep scripts separate: Use batch_score.py for batch and online_score.py for real-time. Different handlers, different expectations.
- Watch metrics that matter: Throughput, per-batch latency, error rate, and CPU/GPU/memory use. Wire alerts so you're not caught off guard at 2 a.m.

Takeaway

Batch scoring doesn't have to be a black box. Azure ML gives you the levers; you just have to use them. Tune these five settings, keep batch and online flows separate, and you'll get faster, more reliable runs without babysitting every night.

This walkthrough is pulled straight from Chapter 5 of Hands-On MLOps on Azure. The full book expands on everything here: deployments, monitoring, alerting, governance, pipelines, and operationalizing large language models responsibly. For the next 48 hours, CloudPro readers get 35% off the ebook and 20% off print. If Azure ML is part of your stack, or about to be, this is the reference worth keeping open on your desk.

GET THE BOOK

📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us. If you have any comments or feedback, just reply back to this email. Thanks for reading and have a great day!
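For reference, the five settings from this issue's article all live side by side in the batch deployment YAML. The sketch below is illustrative only: the deployment, endpoint, model, and compute names are made up, and the values are conservative starting points rather than recommendations.

```yaml
# Hypothetical Azure ML batch deployment spec; names and values are illustrative.
$schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
name: nightly-scoring
endpoint_name: sales-forecast-batch     # assumed existing batch endpoint
model: azureml:sales-forecast-model:3   # assumed registered model
compute: azureml:cpu-cluster            # assumed compute cluster
resources:
  instance_count: 4                     # knob 3: scale out across nodes
max_concurrency_per_instance: 2         # knob 2: parallel workers per node
mini_batch_size: 50                     # knob 1: files handed to each scoring call
retry_settings:                         # knob 4: absorb transient failures
  max_retries: 3
  timeout: 300                          # seconds allowed per mini-batch attempt
error_threshold: -1                     # knob 5: -1 = tolerate all record errors
output_action: append_row
```

Deploying is then a single CLI call along the lines of `az ml batch-deployment create --file deployment.yml`. Change one knob at a time and watch throughput and node metrics between runs, so you know which setting actually moved the needle.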

Shreyans from Packt
16 Jan 2026
6 min read

[New IT leader’s guide] Your blueprint for cloud resilience

CloudPro #116

[Sponsored] Attackers are actively trying to keep you from recovering

It's a hard truth, but recent intelligence confirms that cloud-native backup is now a primary target for groups like Storm-0501. To survive these threats, you need more than just infrastructure and data durability. You need a strategy built for an active adversary, one that includes mindset, architecture, and preparation. Read The Four Levels of Cloud Cyber Resilience: An IT Leader's Guide to learn:

- Why relying on your cloud provider's "uptime" gives you false confidence against targeted attacks
- The blind spots in your current security stack that will prevent fast recovery when seconds count
- The blueprint for upleveling your cloud cyber resilience

Make sure your cloud can survive a cyberattack. Read Now

In today's CloudPro, we'll look at self-healing infrastructure that actually works and is already running in production. Grafana is taking a similar approach with AI agents that investigate incidents in 13 minutes instead of hours, which could save your team about $90K/year in senior engineering time. Meanwhile, DORA's latest research on 5,000 tech professionals figured out which AI capabilities actually separate high performers from struggling teams. We've also got the real reason your network automation keeps failing, AWS's new pentesting agent, and why the Orca-Wiz patent war finally ended.

Cheers,
Shreyans Singh
Editor-in-Chief

24 Hours Remaining: Book Your Seat Now. Get 40% Off with code FINAL40.

This Week in Cloud

Network automation keeps failing because your data is a mess

Network teams keep kicking off "source of truth" projects to consolidate scattered data, but EMA found these are "long and painful endeavors." The blockers: execs don't get why you need $60K for a database when apps are running fine, your network data lives in spreadsheets and random IPAMs with everyone doing their own thing, and even after you build it, engineers keep making CLI changes that drift everything out of sync. The fixes are obvious but hard: get exec buy-in, use discovery tools, integrate with everything, and lock down CLI access until people actually trust the source of truth more than their spreadsheets.

Kubernetes 1.35 adds structured debugging endpoints

K8s 1.35 enhances z-pages debugging endpoints like /statusz and /flagz with structured JSON responses instead of just plain text. Now you can programmatically query component state for automated health checks and better debugging tools without parsing text output. It's still alpha and requires feature gates, but if you're building internal tooling or want to automate component validation, it's worth experimenting with in test environments.

Google wants gRPC as an official MCP transport

Model Context Protocol uses JSON-RPC, but enterprises running gRPC-based services need transcoding gateways. So Google is working with the MCP community to support gRPC as a pluggable transport directly in the SDK. gRPC gives you binary encoding (10x smaller messages), full-duplex streaming, built-in flow control, mTLS, and method-level authorization. MCP maintainers agreed to support pluggable transports, and Google will contribute a gRPC package soon.

Grafana built AI agents that investigate incidents for you

Grafana's Assistant Investigations deploys specialized AI agents in parallel during incidents. They analyze metrics, logs, traces, and profiles simultaneously to build a comprehensive picture in 13 minutes instead of the 2-4 hours a human takes. Real example: for a payment-service latency issue, it detected connection pool exhaustion and traced it to a recent deployment in minutes, with zero PromQL knowledge needed. A conservative estimate puts the savings at 50 hours/month of senior engineering time, roughly $90K/year in reclaimed expertise. It's free during public preview; worth trying for three weeks to prove ROI.

Self-healing infrastructure is here and it's not about replacing SREs

Autonomous healing infrastructure is running in production serving millions of users, and the difference from past attempts is reasoning capability. Systems can finally understand context and make decisions that used to need human judgment. Most orgs are stuck at Level 2 (automated detection, human fixes), but the authors have deployed Level 5 (predictive prevention) for specific failure classes. Real results: memory-leak auto-remediation in 7 minutes vs 35 minutes with humans, a 73% autonomous resolution rate, and an 81% reduction in after-hours pages. The architecture needs four pieces: a decision engine, a safety sandbox, an action library, and a learning loop. The future of infrastructure is autonomous; the question is whether you can afford not to build it.

7 Days Remaining: Book Your Seat Now. Get 30% Off with code FINAL30.

Deep Dive

DORA figured out which AI capabilities actually matter

DORA's 2025 report on 5,000 tech professionals found AI adoption is universal, but success varies wildly because AI amplifies what you already are: it makes high performers better and struggling teams worse. They identified seven capabilities that determine whether AI helps or hurts: a clear AI stance, healthy data ecosystems, AI-accessible internal data, strong version control, small batches, user-centric focus, and quality platforms.

AWS Security Agent does automated pentesting (and it's actually useful)

AWS launched Security Agent at re:Invent: an AI agent that runs continuous penetration testing on your apps, currently in free preview. A test against DVWA took ~2 hours and found plenty of vulns with actual PoC steps to reproduce, not vague scanner output. It definitely helps reduce pentest time, but you still need your own manual testing: think of it as a teammate, not a replacement.

AWS Direct Connect now supports chaos engineering with FIS

AWS Direct Connect now integrates with Fault Injection Service, so you can run controlled chaos experiments testing BGP session disruptions on your Virtual Interfaces. You can validate that traffic actually routes to redundant VIs when the primary BGP session fails and that your apps keep working as expected. Basically, it's chaos engineering for your Direct Connect architecture before a real outage proves your failover doesn't work.

Orca and Wiz dropped their patent lawsuit slugfest

Orca and Wiz agreed to dismiss all claims in their dueling patent lawsuits after the US Patent Board invalidated three of Orca's six asserted patents for lacking novelty. The whole mess started in July 2023 when Orca accused Wiz of copying its architecture and Wiz countersued; now it's over, 10 months after Google agreed to acquire Wiz for $32 billion. Orca is worth $1.8B by comparison and has shrunk headcount 7%, while Wiz nearly tripled to 3,150 employees.

Shreyans from Packt
16 Dec 2025
8 min read

How Google Built a Kubernetes Cluster with 130,000 nodes

AWS launched a DevOps Agent that actually debugs production for you

CloudPro #115

[Sponsored] Elevate Your Cloud Security Strategy with Dark Reading

As a cloud security professional, you need cutting-edge insights to stay ahead of evolving vulnerabilities. The Dark Reading daily newsletter provides in-depth analysis of cloud vulnerabilities, advanced threat detection, and risk mitigation strategies. Stay informed on zero-trust architecture, compliance frameworks, and securing complex multi-cloud and hybrid environments. Sign up to the newsletter

In today's issue, we'll look at: Google pushes Kubernetes to 130K nodes (yes, really), AWS launches an AI agent that debugs production while you sleep, and the network jobs market sends mixed signals: AI certs pay 12% more while automation threatens to eliminate a fifth of IT roles. Plus, hard lessons from recent AWS and Cloudflare outages that went global from single subsystem failures.

Cheers,
Shreyans Singh
Editor-in-Chief

[Sponsored] Ransomware Just Hit Your AWS Cloud. What Happens Next?

Join us for an immersive simulation that'll let you experience a fictionalized ransomware attack, without any of the actual consequences. You'll witness:

- The first suspicious alert
- The shocking depth of the breach
- A heart-stopping realization about compromised backups
- The impossible choice: pay or rebuild?

Don't just hear about ransomware. Experience it. Learn how to be truly cyber resilient. Save My Spot

This Week in Cloud

How Google Built a Kubernetes Cluster with 130,000 Nodes

Google has been testing GKE at 130,000 nodes, twice the official support limit. They're hitting 1,000 pods/sec scheduling throughput with P99 startup under 10 seconds, used Kueue to preempt 39K pods in 93 seconds when priorities shifted, and kept the control plane stable with 1M+ objects in the datastore. The architectural wins: consistent reads from cache (KEP-2340), a snapshottable API server cache (KEP-4988), and Spanner-backed storage handling 13K QPS just for lease updates. This matters because we're moving from chip-limited to power-limited infrastructure. One GB200 pulls 2.7 kW, so at 100K+ nodes you're talking hundreds of megawatts across multiple data centers. Google is betting on multi-cluster orchestration becoming the norm (MultiKueue, managed DRANET): gang scheduling via Kueue now, native Kubernetes support coming (KEP-4671).

Sneak Peek into Kubernetes v1.35

K8s 1.35 is finally killing off cgroup v1 support. If you're still running nodes on ancient distros without cgroup v2, your kubelet won't start. It's also deprecating ipvs mode in kube-proxy, since maintaining feature parity became impossible; nftables is the way forward on Linux. On the features side: in-place pod resource updates hitting GA (no more pod restarts for CPU/memory changes), native pod certificates for mTLS without needing SPIFFE/SPIRE, numeric comparisons for taints (you can finally do SLA-based scheduling with Gt/Lt operators), user namespaces maturing through beta (container root remapped to an unprivileged host UID), and image volumes likely enabled by default (mount OCI artifacts directly as volumes). Node declared features are going alpha too: nodes will publish their supported capabilities to avoid version-skew scheduling failures.

AWS launched a DevOps Agent that actually debugs production for you

No more 3am war rooms might actually be realistic now. DevOps Agent is a "frontier agent" that runs autonomously for hours investigating incidents while you sleep. It connects to CloudWatch, Datadog, Dynatrace, GitHub/GitLab, and ServiceNow, and builds an application topology map automatically. When stuff breaks, it correlates metrics, logs, and deployments, identifies root causes, updates Slack channels, and suggests mitigations. It has a web app for operators to manually trigger investigations or steer the agent mid-investigation. The interesting part: it analyzes past incidents to recommend systematic improvements (multi-AZ gaps, monitoring coverage, deployment pipeline issues). It also creates detailed mitigation specs that work with agentic dev tools, and supports custom tool integration via MCP servers for your internal systems.

AWS will manage your Argo CD, ACK, and KRO now

AWS just launched EKS Capabilities: fully managed versions of Argo CD, AWS Controllers for Kubernetes (ACK), and Kube Resource Orchestrator (KRO) that run in AWS-owned accounts, not your cluster. They handle scaling, patching, upgrades, and breaking-change analysis automatically. There's SSO with IAM Identity Center for Argo CD, resource adoption in ACK for migrating from Terraform/CloudFormation, and KRO for building reusable resource bundles. This is basically AWS saying "stop running your own GitOps infrastructure." Makes sense, given 45% of K8s users already run Argo CD in production (per the 2024 CNCF survey).

Early Bird Offer: Get 40% Off with code EARLY40.

Deep Dive

Controlling Kubernetes Network Traffic

Ingress NGINX is retiring, and it got me thinking about how convoluted network traffic control has become in Kubernetes. You've got your CNI for connectivity, network policies for security, ingress controllers or Gateway API for north-south routing, maybe a service mesh for east-west traffic, and honestly most apps don't need all of this. The real decision most people face is simpler: ingress controller vs Gateway API. Here's the thing: if you just need basic HTTP/HTTPS routing and you're already comfortable with nginx or Traefik, stick with ingress controllers. They work, they're stable, and the tooling is mature. Gateway API makes sense if you need advanced capabilities like protocol-agnostic routing or cross-namespace setups, or you're running multi-team environments where role separation matters. All three clouds (AWS ALB Controller, Azure AGIC, GKE Ingress) have solid managed options for both approaches now. Gateway API is clearly the future, but "future-proof" doesn't mean you need to migrate today.
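To make the ingress-vs-Gateway-API trade-off above concrete, here is what a basic route looks like in Gateway API terms. This is a sketch only; the Gateway name, namespace, and backend Service are invented for illustration.

```yaml
# Hypothetical HTTPRoute; assumes a Gateway named "shared-gateway" already
# exists in the cluster and a Service "api-svc" serves the backend.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
  namespace: web
spec:
  parentRefs:
    - name: shared-gateway    # the platform team owns the Gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: api-svc       # the app team owns its routes and backends
          port: 8080
```

The split ownership visible here (platform team manages the Gateway, app teams manage their own HTTPRoutes) is exactly the role separation mentioned above; a classic Ingress resource collapses both concerns into one object.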
Network jobs roundup: AI certs pay, skills gap persists, mixed employment signals

The network jobs market is weird right now. AI certifications are commanding 12% higher pay year-over-year while overall IT skills premiums dropped 0.7%. CompTIA just launched AI Infrastructure and AITECH certs, and Cisco added wireless-only tracks (CCNP/CCIE Wireless launching March 2026). Meanwhile, unemployment for tech workers sits at 2.5-3% depending on who's counting, but large enterprises keep announcing layoffs while small and midsize companies are actually hiring. The skills gap is real, though: 68% of orgs say they're understaffed in AI/ML ops, and 65% in cybersecurity. Telecom lost 59% of positions to automation, and survey data shows 18-22% of the IT workforce could be eliminated by AI in the next 5 years. But demand for AI/ML, cloud architecture, and security skills keeps growing. The takeaway: upskill in AI and automation or get left behind, especially if you're in support, help desk, or legacy infrastructure roles.

Three Lessons from the Recent AWS and Cloudflare Outages

AWS US-EAST-1 went down for 15 hours in October (a DNS race condition in DynamoDB), and Cloudflare ate it in November (an oversized Bot Management config file crashed proxies globally). Both followed the same pattern: a small defect in one subsystem cascaded everywhere. The lessons are obvious but worth repeating: design out single points of failure with multi-region/multi-cloud by default; use AI-powered monitoring to correlate signals and automate rollback (monitoring without automated response is just expensive alerting); and actually practice your DR plan regularly, because you fall to the level of your practice rather than rise to your runbook. The deeper point: complexity keeps growing with every new region and service, multiplying the ways a small change can blow up globally. The answer is designing for failure: limit blast radius, decouple planes, automate validation. No provider is immune, so your architecture needs to assume failures will happen and route around them automatically.

Test your DR plan with chaos engineering, not hope (Google SRE Practice Lead)

Google's SRE team wrote a piece on why your disaster recovery plan probably doesn't work and how chaos engineering proves it. The premise: systems change constantly (microservices, config updates, API dependencies), so that DR doc you wrote last quarter is already outdated. Chaos engineering lets you run controlled experiments (simulate database failovers, regional outages, resource exhaustion) and measure whether you actually meet your SLOs during the disaster. It's not about breaking things randomly. You define steady state, form a hypothesis (like "traffic will fail over to the secondary region in 3 minutes with <1% errors"), inject a specific failure, and measure what happens. The key insight is connecting chaos to SLOs. Traditional DR drills might "pass" because backup systems came online, but if it took 20 minutes and burned your entire error budget, customers saw you as down. Start small with one timeout or retry test, build confidence, and scale from there.

Stelvio: AWS for Python devs

Stelvio is a Python framework that lets you define AWS infrastructure in pure Python, with smart defaults handling the annoying bits. Run stlv init, write your infra in Python (DynamoDB tables, Lambda functions, API Gateway routes), hit stlv deploy, and you're done. No Terraform, no CDK YAML hell, no mixing infrastructure code with application code.

Shreyans from Packt
08 Dec 2025
3 min read

Master AWS & Land $130K+ Cloud Roles

Flash Sale: 40% Off ends in 48 Hours

CloudPro #114

FLASH SALE: 40% OFF for 48 HOURS. Book Your Seat with code FLASH40.

"I went from DevOps engineer to Solutions Architect within 6 months of getting certified. The jump wasn't just in title. It was $40K more." - Michael T., Senior Solutions Architect at Atlassian

That's the real impact of mastering AWS architecture. Companies aren't just looking for people who can use AWS. They want engineers who understand architecture, can design for scale, and make smart decisions under pressure. Whether you're aiming for certification or leveling up your cloud design skills, learning to think like an AWS Solutions Architect opens doors to roles paying $130K-$180K.

Here's the problem: most people spend months studying services in isolation and never learn how real architects make design decisions. That's exactly why we created this 5-hour intensive workshop with AWS experts:

- Saurabh Shrivastava: Global Solutions Architect Leader, AWS
- Kate Gawron: AWS-certified Database Specialist
- Kamal Arora: Director of Solutions Architecture, AWS
- Ashutosh Dubey: GenAI Specialist Architect, AWS

Fellow attendees have already secured their spots: Principal Cloud Engineer, Optum; Sr. Cloud Engineer, Cardinal Health; Principal Data Analyst, UnitedHealth Group; Database Engineer, Telenet; Software Architect, adesso SE.

What you'll walk away with:

- A clear mental model of how to think and decide like an AWS Solutions Architect
- Hands-on experience building highly available, multi-AZ architectures through live labs
- Mastery of core AWS patterns: VPC design, high availability, serverless, and container workloads
- Proven strategies for making smart trade-offs across availability, performance, resilience, and cost
- A two-week post-event study plan, curated resources, and 120-day access to full workshop recordings
- A Credly digital badge recognizing your completion

January 17th | 9am - 2pm EDT. We'd be delighted to see you there.

Cheers,
Shreyans Singh
Editor-in-Chief

Sponsored: The GenAI Experience Users Already Love, Securely Accessing Your SaaS Product

Frontegg AgentLink gives you the confidence that any access to your SaaS product is done responsibly, so you can fast-track your role in the AI revolution. Get AgentLink now