
Tech News

3711 Articles
Matthew Emerick
07 Oct 2020
5 min read

Three ways serverless APIs can accelerate enterprise innovation from Microsoft Azure Blog > Announcements

With the wrong architecture, APIs can be a bottleneck not only to your applications but to your entire business. Bottlenecks such as downtime, low performance, or high application complexity can result in inflated infrastructure and organizational costs and lost revenue. Serverless APIs mitigate these bottlenecks with autoscaling capabilities and consumption-based pricing models.

Once you start thinking of serverless not only as a remover of bottlenecks but also as an enabler of business, layers of your application infrastructure become a source of new opportunities. This is especially true of the API layer: beyond their traditional role as the communicators between software services, APIs can be productized to scale your business, attract new customers, or offer new services to existing customers. Given the increasing dominance of APIs and API-first architectures, companies and developers are gravitating towards serverless platforms to host APIs and API-first applications to realize these benefits. One serverless compute option for hosting APIs is Azure Functions: event-triggered code that can scale on demand, where you only pay for what you use. Gartner predicts that 50 percent of global enterprises will have deployed a serverless functions platform by 2025, up from only 20 percent today. You can publish Azure Functions through API Management to secure, transform, maintain, and monitor your serverless APIs.

Faster time to market

Modernizing your application stack to run microservices on a serverless platform decreases internal complexity and reduces the time it takes to develop new features or products. Each serverless function implements a microservice. By adding many functions to a single API Management product, you can build those microservices into an integrated distributed application. Once the application is built, you can use API Management policies to implement caching or enforce security requirements.
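The pattern described above can be sketched in a few lines of plain Python: each small function implements one microservice, and a single gateway object provides the one entry point that routes requests to them. This is an illustrative sketch of the architecture, not Azure's actual API; the routes and service names are hypothetical.

```python
# Sketch of the gateway pattern described above: each function is an
# independently deployable unit of business logic, and the gateway is
# the single managed entry point that routes requests to them.
# Routes and service names here are hypothetical, not Azure's API.

class ApiGateway:
    def __init__(self):
        self.routes = {}

    def register(self, path, handler):
        """Publish one function (microservice) under a route."""
        self.routes[path] = handler

    def handle(self, path, payload):
        handler = self.routes.get(path)
        if handler is None:
            return {"status": 404, "body": "no such route"}
        return {"status": 200, "body": handler(payload)}

# Two tiny "serverless functions".
def get_menu(payload):
    return ["burrito", "bowl", "tacos"]

def place_order(payload):
    return "ordered " + payload["item"]

gateway = ApiGateway()
gateway.register("/menu", get_menu)
gateway.register("/order", place_order)

print(gateway.handle("/menu", {}))
print(gateway.handle("/order", {"item": "bowl"}))
```

Cross-cutting concerns such as caching or authentication would then be applied once, in the gateway, rather than in every function.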
Quest Software uses Azure App Service to host microservices in Azure Functions. These support user capabilities such as registering new tenants and application functionality like communicating with other microservices or with other Azure platform resources such as the Azure Cosmos DB managed NoSQL database service.

"We're taking advantage of technology built by Microsoft and released within Azure in order to go to market faster than we could on our own. On average, over the last three years of consuming Azure services, we've been able to get new capabilities to market 66 percent faster than we could in the past." - Michael Tweddle, President and General Manager of Platform Management, Quest

Quest also uses Azure API Management as a serverless API gateway for the Quest On Demand microservices that implement business logic with Azure Functions, and to apply policies that control access, traffic, and security across microservices.

Modernize your infrastructure

Developers should be focusing on developing applications, not provisioning and managing infrastructure. API Management provides a serverless API gateway that delivers a centralized, fully managed entry point for serverless backend services. It enables developers to publish, manage, secure, and analyze APIs at global scale. Using serverless functions and API gateways together allows organizations to better optimize resources and stay focused on innovation. For example, a serverless function can provide an API through which restaurants adjust their local menus if they run out of an item.

Chipotle turned to Azure to create a unified web experience from scratch, leveraging both Azure API Management and Azure Functions for critical parts of its infrastructure. Calls to back-end services (such as ordering, delivery, and account management and preferences) hit Azure API Management, which gives Chipotle a single, easily managed endpoint and API gateway into its various back-end services and systems.
With such functionality, other development teams at Chipotle are able to modernize the back-end services behind the gateway in a way that remains transparent to Smith's front-end app.

"API Management is great for ensuring consistency with our API interactions, enabling us to always know what exists where, behind a single URL," says Smith. "There are lots of changes going on behind the API gateway, but we don't need to worry about them." - Mike Smith, Lead Software Developer, Chipotle

Innovate with APIs

Serverless APIs can be used to increase revenue, decrease cost, or improve business agility. As a result, technology becomes a key driver of business growth. Businesses can leverage artificial intelligence to analyze API calls, recognize patterns, and predict future purchase behavior, thus optimizing the entire sales cycle.

PwC AI turned to Azure Functions to create a scalable API for its regulatory obligation knowledge mining solution. It also uses Azure Cognitive Search to quickly surface predictions found by the solution, embedding years of experience into an AI model that easily identifies regulatory obligations within text.

"As we're about to launch our ROI POC, I can see that Azure Functions is a value-add that saves us two to four weeks of work. It takes care of handling prediction requests for me. I also use it to extend the model to other PwC teams and clients. That's how we can productionize our work with relative ease." - Todd Morrill, PwC Machine Learning Scientist-Manager, PwC

Quest Software, Chipotle, and PwC are just a few of the Microsoft Azure customers leveraging tools such as Azure Functions and Azure API Management to create an API architecture that ensures their APIs are monitored, managed, and secure. Rethinking your API approach to use serverless technologies will unlock new capabilities within your organization that are not limited by scale, cost, or operational resources.
Get started immediately

Learn about common serverless API architecture patterns at the Azure Architecture Center, where we provide high-level overviews and reference architectures for common patterns that leverage Azure Functions and Azure API Management, in addition to other Azure services.

Reference architecture for a web application with a serverless API.

Savia Lobo
02 May 2019
4 min read

Docker announces collaboration with Microsoft’s .NET at DockerCon 2019

Using Docker and .NET together was first brought up in 2017, when Microsoft explained the cons of using them together. Last year's DockerCon update showed multiple .NET demos of how one can use Docker for modern applications as well as for older applications that use traditional architectures. This made it easier for users to containerize .NET applications using tools from both Microsoft and Docker. The team said that "most of their effort to improve the .NET Core Docker experience in the last year has been focused on .NET Core 3.0." "This is the first release in which we've made substantive runtime changes to make CoreCLR much more efficient, honor Docker resource limits better by default, and offer more configuration for you to tweak," Microsoft writes in one of its blog posts. The team also mentions that it is invested in making .NET Core a true container runtime, and looks forward to hardening .NET's runtime to make it container-aware and able to function efficiently in low-memory environments. Let's have a look at the different advantages of bringing Docker and .NET together.

Docker + .NET advantages

Less memory allocation and fewer GC heaps by default

With .NET Core 3.0, the team reduced the minimal generation 0 GC allocation budget to better align with modern processor cache sizes and cache hierarchy. With this, the initial allocation size, which was unnecessarily large, was significantly reduced without any perceivable loss of performance, bringing in tens of percentage points of improvement. The team also describes a new policy for determining how many GC heaps to create. This is most important on machines where a low memory limit is set but no CPU limit, on a machine with many CPU cores. The GC now reserves a memory segment with a minimum size of 16 MB per heap, which limits the number of heaps the GC will create. Both changes result in lower memory usage by default and make the default .NET Core configuration better in more cases.
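The heap policy described above can be illustrated with a bit of arithmetic: with at least 16 MB reserved per heap, a container memory limit caps how many heaps can be created even on a machine with many cores. The sketch below is an illustration of that idea, not HotSpot's or CoreCLR's actual algorithm.

```python
# Illustration (not the runtime's actual algorithm) of the policy
# described above: the GC reserves at least a 16 MB segment per heap,
# so a low container memory limit bounds the heap count even when the
# machine exposes many CPU cores.

MIN_SEGMENT_MB = 16

def gc_heap_count(cpu_cores, memory_limit_mb):
    by_memory = max(1, memory_limit_mb // MIN_SEGMENT_MB)
    # Without the memory bound you would get roughly one heap per core.
    return min(cpu_cores, by_memory)

print(gc_heap_count(48, 160))   # 48 cores but a 160 MB limit: 10 heaps
print(gc_heap_count(4, 4096))   # 4 cores, generous limit: 4 heaps
```

This is why the change matters most for containers: a 48-core host with a small memory limit no longer gets 48 heaps' worth of reserved segments.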
Added PowerShell to .NET Core SDK container images

PowerShell Core has been added to the .NET Core SDK Docker container images, per requests from the community. Having PowerShell inside the .NET Core SDK container image enables two main scenarios that were not otherwise possible: writing .NET Core application Dockerfiles with PowerShell syntax, for any OS, and writing .NET Core application/library build logic that can be easily containerized. Note: PowerShell Core is now available as part of the .NET Core 3.0 SDK container images. It is not part of the .NET Core 3.0 SDK.

.NET Core images now available via Microsoft Container Registry

Microsoft teams are now publishing container images to the Microsoft Container Registry (MCR). There are two primary reasons for this change: to syndicate Microsoft-provided container images to multiple registries, like Docker Hub and Red Hat, and to use Microsoft Azure as a global CDN for delivering Microsoft-provided container images.

Platform matrix and support

With .NET Core, the team tries to support a broad set of distros and versions. The policy for each distro is as follows:

Alpine: support tip and retain support for one quarter (3 months) after a new version is released. Currently, 3.9 is tip and the team will stop producing 3.8 images in a month or two.

Debian: support one Debian version per latest .NET Core version. This is also the default Linux image used for a given multi-arch tag. For .NET Core 3.0, the team may publish Debian 10 based images. However, they may produce Debian 9 based images for .NET Core 2.1 and 2.2, and Debian 8 images for earlier .NET Core versions.

Ubuntu: support one Ubuntu version per latest .NET Core version (18.04). As new Ubuntu LTS versions appear, the team will start supporting non-LTS Ubuntu versions as a means of validating the new LTS versions.

For Windows, they support the cross-product of Nano Server and .NET Core versions.
ARM Architecture

The team plans to add support for ARM64 on Linux with .NET Core 3.0, complementing the ARM32 and X64 support already in place. This will enable .NET Core to be used in even more environments. Apart from these advantages, the team has also added support for Docker memory and CPU limits. To know more about this partnership in detail, read Microsoft's official blog post.

DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories
Are Debian and Docker slowly losing popularity?
Creating a Continuous Integration commit pipeline using Docker [Tutorial]

Amrata Joshi
03 Jun 2019
3 min read

Former npm CTO introduces Entropic, a federated package registry with a new CLI and much more!

Yesterday, at JSConfEU '19, the team behind Entropic announced Entropic, a federated package registry with a new CLI that works smoothly with the network. Entropic is Apache 2 licensed and federated; it mirrors all packages that users install from the legacy package manager. Entropic offers a new file-centric API and a content-addressable storage system that minimizes the amount of data that must be retrieved over a network. This file-centric approach also applies to the publication API.

https://www.youtube.com/watch?v=xdLMbvEc2zk

C J Silverio, Principal Engineer at Eaze, said during the announcement, "I actually believe in open source despite everything. I think it's good for us as human beings to give things away to each other. We would like to give something away to you all right now."

https://twitter.com/kosamari/status/1134876898604048384
https://twitter.com/i/moments/1135060936216272896
https://twitter.com/colestrode/status/1135320460072296449

Features of Entropic

Package specifications

All Entropic packages are namespaced, and a full Entropic package spec includes the hostname of its registry. Package specifications are fully qualified with a namespace, hostname, and package name, and look like namespace@example.com/pkg-name. For example, the ds cli is specified by chris@entropic.dev/ds. If a user publishes a package to their local registry that depends on packages from other registries, the local instance will mirror all the packages on which the user's package depends. The team aims to keep each instance entirely self-sufficient, so installs aren't dependent on a resource that might vanish. Abandoned packages are moved to the abandonware namespace.
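The fully qualified spec format described above can be pulled apart mechanically. Here is a small, hypothetical parser for specs of the form namespace@hostname/pkg-name; it is an illustration of the format, not Entropic's actual code.

```python
# Illustrative parser for the Entropic package spec format described
# above: namespace@hostname/pkg-name (e.g. chris@entropic.dev/ds).
# Not Entropic's actual implementation (which is JavaScript).

def parse_spec(spec):
    at = spec.index("@")
    slash = spec.index("/", at)
    return {
        "namespace": spec[:at],
        "host": spec[at + 1:slash],
        "name": spec[slash + 1:],
    }

print(parse_spec("chris@entropic.dev/ds"))
```

The hostname component is what makes the spec federated: the same namespace and package name on a different registry host is a different package.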
Packages can be easily updated by any user in the package's namespace and can also have a list of maintainers.

The ds cli

Entropic requires a new command-line client known as ds, or "entropy delta". According to the Entropic team, the cli doesn't have a very sensible shell for running commands yet. Currently, users who want to install packages with ds can run ds build in a directory with a Package.toml to produce a ds/node_modules directory. The GitHub page reads, "This is a temporary situation!"

Entropic appears to be an alternative to npm that seeks to address the limitations of the ownership model of npm, Inc. It aims to shift from centralized ownership to federated ownership, restoring power to the commons.

https://twitter.com/deluxee/status/1135489151627870209

To know more about this news, check out the GitHub page.

GitHub announces beta version of GitHub Package Registry, its new package management service
npm Inc. announces npm Enterprise, the first management code registry for organizations
Using the Registry and xlswriter modules

Bhagyashree R
28 Jan 2019
2 min read

Introducing SCRIPT-8, an 8-bit JavaScript-based fantasy computer to make retro-looking games

Adding to the list of several fantasy consoles/computers is the newly introduced SCRIPT-8, written by Gabriel Florit, a graphics reporter at the Washington Post who also likes working with augmented reality.

https://twitter.com/gabrielflorit/status/986716413254610944

SCRIPT-8 is a JavaScript-based fantasy computer for making, playing, and sharing tiny retro-looking games. Based on Bret Victor's Inventing on Principle and Learnable Programming, it gives programmers a live-coding experience: the program's output updates as they code. Games built with SCRIPT-8 are called cassettes. Each cassette is recorded at a URL, which you can share with anyone and play with a keyboard or gamepad. You can also make your own version of an existing cassette by changing its code, art, or music, and record it to a different cassette.

What are SCRIPT-8's features?

- A code editor that gives you immediate feedback.
- A slider with which you can update numbers without typing.
- A time-traveling tool for pausing and rewinding the game; buttons let you see a character's past and future paths.
- A sprite editor whose changes are reflected in the game instantly.
- A map editor to create new paths.
- A music editor with which you can create phrases, group them into chains, and turn those into songs.

You can read more about SCRIPT-8 on its website.

Google DeepMind's AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers
Fortnite server suffered a minor outage, Epic Games was quick to address the issue
Deepmind's AlphaZero shows unprecedented growth in AI, masters 3 different games
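The time-traveling tool mentioned above rests on a simple idea: snapshot the game state every frame so the player can pause and step backwards. The sketch below illustrates that idea in Python; it is a concept demo, not SCRIPT-8's implementation (which is JavaScript).

```python
# Concept sketch of a time-traveling (rewind) tool: keep a snapshot of
# the game state for every frame, then rewinding is just dropping the
# most recent snapshots. Illustration only, not SCRIPT-8's code.

class TimeTravel:
    def __init__(self, state):
        self.history = [dict(state)]  # one snapshot per frame

    def step(self, update):
        new_state = dict(self.history[-1])  # copy, never mutate the past
        update(new_state)
        self.history.append(new_state)
        return new_state

    def rewind(self, frames):
        del self.history[-frames:]  # discard the newest snapshots
        return dict(self.history[-1])

game = TimeTravel({"x": 0})
for _ in range(5):
    game.step(lambda s: s.__setitem__("x", s["x"] + 1))  # sprite moves right

print(game.rewind(2))  # back to where the sprite was two frames ago
```

Because every snapshot is kept, the same history also supports drawing a character's past (and, by re-simulating, future) path.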

Amarabha Banerjee
07 Jun 2018
3 min read

5 Things you need to know about Java 10

Oracle announced the release of Java 10 on March 20. While this is not an LTS version, there are a few changes in this version worth noting. In this article we'll look at 5 of the most important things you'll need to watch out for, especially if you're a Java developer.

Java releases a long-term support version every three years. On that schedule, the next long-term support version, Java 11, will be released in Fall 2018. Java 10 is a precursor to it and contains some important changes that will take a clearer shape in the next version.

Java 10 is trying to emulate some of the popular features of Scala and Kotlin. One of the primary reasons may be the growing popularity of Kotlin in both the web and mobile development domains, along with the dynamic typing capability in both Scala and Kotlin. The introduction of local-variable type inference is one such feature. It means that variables can now be declared with "var", and when you assign an integer or a string to one, the compiler automatically infers its type. Although this doesn't make Java a dynamically typed language like Python, it still allows programmers a lot more flexibility and lets them avoid boilerplate in their code.

There are two JEPs in JDK 10 that focus on improving the current garbage collection (GC) elements. The first one, Garbage-Collector Interface (JEP 304), will introduce a clean garbage collector interface to help improve the source code isolation of different garbage collectors. In current Java versions there are bits and pieces of GC source files scattered all over the HotSpot sources. This becomes an issue when implementing a new garbage collector, since developers have to know where to look for those source files. One of the main goals of this JEP is to introduce better modularity for HotSpot internal GC code, have a cleaner GC interface, and make it easier to implement new collectors.
Java 10 promises to be much faster than its previous version by making the full garbage collector parallel. This is a welcome change from version 9, since it gives developers scope to better allocate memory and use the GC in parallel. The full GC in previous versions could not run in parallel, which made it heavy and difficult to operate for complex applications; the parallel full GC removes that limitation and makes collection much more lightweight and efficient.

Java 10 also enables programmers to allow heap allocation on alternative memory devices. This feature lets the Java VM identify the most important tasks and allocate maximum memory to those priority processes, while other processes are allocated to alternative memory, which helps speed up the overall process. The change matters to Java developers because it enables better, more efficient memory management and hence increases the performance of their applications.

With these changes, Java 10 has opened the doors to a more open and flexible language that is looking towards the future. With Kotlin breathing down its neck as a worthy alternative, the stage is set for Java to work towards a more dynamic, easy-to-use, power-packed version 11 in Fall 2018. We will be waiting for it along with the Java developers.

What can you expect from the upcoming Java 11 JDK?
Oracle reveals issues in Object Serialization. Plans to drop it from core Java.
Java Multithreading: How to synchronize threads to implement critical sections and avoid race conditions

Savia Lobo
11 Jun 2019
3 min read

US Customs and Border Protection reveal data breach that exposed thousands of traveler photos and license plate images

Yesterday, U.S. Customs and Border Protection (CBP) revealed a data breach exposing photos of travelers and vehicles traveling in and out of the United States. CBP first learned of the attack on May 31 and said that none of the image data had been identified "on the Dark Web or Internet". According to a CBP spokesperson, one of its subcontractors transferred images of travelers and license plate photos collected by the agency to its internal networks, which were then compromised by the attack. The agency declined to name the subcontractor that was compromised, and said that its own systems had not been compromised. "A spokesperson for the agency later said the security incident affected 'fewer than 100,000 people' through a 'few specific lanes at a single land border' over a period of a month and a half," according to TechCrunch.

https://twitter.com/AJVicens/status/1138195795793055744

"No passport or other travel document photographs were compromised and no images of airline passengers from the air entry/exit process were involved," the spokesperson said.

According to The Register's report released last month, a huge trove of internal files was breached from the firm Perceptics and was being offered on the dark web as a free download. The company's license plate readers are deployed at various checkpoints along the U.S.-Mexico border.

https://twitter.com/josephfcox/status/1138196952812806144

Now, according to the Washington Post, the Microsoft Word document of CBP's public statement, sent Monday to Washington Post reporters, included the name "Perceptics" in the title: "CBP Perceptics Public Statement". "Perceptics representatives did not immediately respond to requests for comment. CBP spokeswoman Jackie Wren said she was 'unable to confirm' if Perceptics was the source of the breach," the Washington Post further added.

In a statement to The Post, Sen. Ron Wyden (D-Ore.) said, "If the government collects sensitive information about Americans, it is responsible for protecting it — and that's just as true if it contracts with a private company." "Anyone whose information was compromised should be notified by Customs, and the government needs to explain exactly how it intends to prevent this kind of breach from happening in the future," he further added.

ACLU senior legislative counsel Neema Singh Guliani said that the breach "further underscores the need to put the brakes" on the government's facial recognition efforts. "The best way to avoid breaches of sensitive personal data is not to collect and retain such data in the first place," she said.

Jim Balsillie on Data Governance Challenges and 6 Recommendations to tackle them
US blacklist China's telecom giant Huawei over threat to national security
Privacy Experts discuss GDPR, its impact, and its future on Beth Kindig's Tech Lightning Rounds Podcast
Bhagyashree R
01 Apr 2019
3 min read

Stack Overflow is looking for a new CEO as Joel Spolsky becomes Chairman

Last week, Joel Spolsky announced that he is stepping down as CEO of Stack Overflow and taking up a new role as chairman of the site's board. That means one of the most popular question-and-answer sites on the planet - and one of the most important for software developers - is now looking for a new CEO.

https://twitter.com/spolsky/status/1111267189133316097

Back in 2008, Spolsky, along with Jeff Atwood, co-founded Stack Overflow with the idea of bringing voting and editing to a Q&A site. The idea was to make it easier for programmers to find the right answer, instead of scrolling endlessly through a discussion forum. Explaining what makes Stack Overflow different from other Q&A sites on his blog, Joel on Software, Spolsky said, "already, it's better than other Q&A sites, because you don't have to read through a lot of discussion to find the right answer if it's in there somewhere."

At a Microsoft conference just six months after the site's launch, Spolsky asked how many developers were using the site. He and Atwood were pleasantly surprised when one-third of the crowd raised their hands. Today, practically every developer visits Stack Overflow to learn from other developers. In addition to the huge user base, the company has grown to almost 300 employees and achieved $70m in revenue last year.

For the future of Stack Overflow, Spolsky hopes to make the platform more inclusive and welcoming for new users. "The type of people Stack Overflow serves has changed, and now, as a part of the developer ecosystem, we have a responsibility to create an online community that is far more diverse, inclusive, and welcoming of newcomers," he adds.

Many Stack Overflow users have a love/hate relationship with the platform. Some developers find the site intimidating and unwelcoming - a fact that Stack Overflow itself has acknowledged in the past.
For example, threads can become filled with condescending, sarcastic, and dismissive comments, particularly when newcomers fail to follow site rules; they are simply expected to know things right from the start. A Stack Overflow user shares a solution: "I think if new users were given a walkthrough of the site, how it works, and what's expected, it would be MUCH more welcoming, and the comments (while still sometimes unnecessarily sarcastic) would be more warranted, since the requirements have been clearly laid out." Saying things like "thank you" and "please" is considered noise and a distraction from the actual point.

There are endless examples of such behavior, and they are not limited to newbies. When a woman seeking help with a Flexbox margins issue posted on Stack Overflow, she got a message saying, "if you don't get this…you have no business making a portfolio as a web developer". April Wensel, the founder of Compassionate Coding, has written consistently about the problems users sometimes face on Stack Overflow, sharing numerous examples of people being rude and demeaning. She hopes that the next CEO will take the right measures to make the site inclusive and "more human":

https://twitter.com/aprilwensel/status/1111331785730719745?s=19

Read Spolsky's announcement at the Stack Overflow blog.

Stack Overflow celebrates its 10th birthday as the most trusted developer community
4 surprising things from StackOverflow's 2018 survey.
StackOverflow just updated its developers' salary calculator; includes 8 new countries in 2018.

Prasad Ramesh
18 Oct 2018
4 min read

How the Titan M chip will improve Android security

Aside from the big notch on the Pixel 3 XL, both the Pixel 3 XL and the Pixel 3 will sport a new security chip called the Titan M. This dedicated chip raises the security game in these new Pixel devices. The M most likely stands for mobile. The Titan chip was previously used internally at Google, and this is another move towards putting better security in the hands of everyday consumers, after Google made the Titan security key available for purchase.

What does the Titan M do?

The Titan M is an individual low-power security chip designed and manufactured by Google; it is not part of the Snapdragon 845 powering the new Pixel devices. It performs several security functions at the hardware level:

- Stores and enforces the locks and rollback counters used by Android Verified Boot to prevent attackers from unlocking the bootloader.
- Securely locks and encrypts your phone and further limits invalid attempts to unlock the device.
- Lets apps use the Android StrongBox Keymaster module to generate and store keys on the Titan M.
- Has direct electrical connections to the Pixel's side buttons, preventing an attacker from faking button presses.
- Enforces factory-reset policies so that lost or stolen devices can be restored only by their owner.
- With Insider Attack Resistance, ensures that even Google itself can't unlock a phone or install firmware updates without the passcode set by the owner.

An overview of the Titan M chip

Since the Titan M is a separate chip, it protects against hardware-level attacks such as Rowhammer, Spectre, and Meltdown. Google has complete control and supervision over building this chip, right from the silicon stages. It has taken care to incorporate features like low power usage, low latency, hardware cryptographic acceleration, tamper detection, and secure, timely firmware updates. On the left is the first-generation Titan chip and on the right is the new Titan M chip.
Source: Google Blog

Titan M CPU

The CPU used is an ARM Cortex-M3 microprocessor specially hardened against side-channel attacks. It has been augmented with defensive features to detect and act upon abnormal conditions. The CPU core also exposes several control registers to gate access to chip configuration settings and peripherals. The Titan M verifies the signature of its firmware using a public key built into the chip; on signature verification, the flash is locked to prevent any modification. It also has a large programmable coprocessor for public-key algorithms.

Encryption in the chip

The new chip also features hardware accelerators for AES and SHA. The accelerators are flexible, meaning they can be initialized either with firmware-provided keys or with chip-specific, hardware-bound keys generated by the Key Manager module. The chip-specific keys are generated internally with the True Random Number Generator (TRNG), so such keys are confined entirely to the chip and are not available outside it. Google tried to pack maximum security features into the Titan M's 64 KB of RAM. The RAM contents of the chip can be preserved even during battery-saving mode, when most hardware modules are turned off. Here's a diagram showing the chip components.

Source: Google Blog

Google is aware of what goes into each chip, from logic gates to boot code. The chip allows higher security in areas like two-factor authentication, medical device control, and P2P payments, among other potential future uses. The Titan M firmware source code will be publicly available soon. For more details, visit the Google Blog.

Google Titan Security key with secure FIDO two factor authentication is now available for purchase
Google introduces Cloud HSM beta hardware security module for crypto key security
Google's Protect your Election program: Security policies to defend against state-sponsored phishing attacks, and influence campaigns
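The boot flow described above, verify the firmware's signature against a key baked into the chip and only then lock the flash, can be sketched roughly. In the sketch below, an HMAC-SHA256 tag stands in for the real public-key signature (Python's standard library has no public-key crypto), and the key and firmware strings are hypothetical; the point is the verify-then-lock ordering, not the cryptography.

```python
# Rough sketch of the verify-then-lock boot flow described above.
# An HMAC tag stands in for the real public-key signature the Titan M
# checks against a key built into the chip; key and firmware contents
# here are invented for illustration.
import hashlib
import hmac

BUILT_IN_KEY = b"key-fixed-at-manufacture"  # hypothetical baked-in key

def sign(firmware):
    return hmac.new(BUILT_IN_KEY, firmware, hashlib.sha256).digest()

def boot(firmware, tag):
    # Verify first: refuse to run firmware with a bad signature.
    if not hmac.compare_digest(sign(firmware), tag):
        return "refuse to boot: bad firmware signature"
    flash_locked = True  # then lock flash to prevent modification
    return "booted (flash locked: %s)" % flash_locked

fw = b"titan-m firmware image v2"
print(boot(fw, sign(fw)))                 # valid image boots
print(boot(b"tampered image", sign(fw)))  # tampered image is rejected
```

On the real chip the check uses an asymmetric signature, so only the firmware's author can produce a valid tag; the symmetric HMAC here merely mimics the control flow.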

Fatema Patrawala
01 Aug 2019
2 min read

Scroll Snapping and other cool CSS features come to Firefox 68

Yesterday, the Firefox team announced details of the new CSS features added in Firefox 68. Earlier this month they had announced the release of Firefox 68 with a bunch of CSS additions and changes. Let us take a look at each:

CSS Scroll Snapping

The update in Firefox 68 brings the Firefox implementation in line with Scroll Snap as implemented in Chrome and Safari. In addition, it removes the old properties which were part of the earlier Scroll Snap Points specification.

The ::marker pseudo-element

The ::marker pseudo-element selects the marker box of a list item, which typically contains the list bullet or a number. If you have ever used an image as a list bullet, or wrapped the text of a list item in a span in order to have different bullet and text colors, this pseudo-element is for you! With it, you can target the bullet itself. Only a few CSS properties may be used on ::marker, including all font properties, so you can change the font-size or family to be different from the text.

Using ::marker on non-list items

A marker can only be shown on list items; however, you can turn any element into a list item by using display: list-item. The official blog post covers a detailed example with code showing how to do this. The ::marker pseudo-element is standardized in CSS Lists Level 3 and CSS Pseudo-elements Level 4, and is currently implemented in Firefox 68 and Safari.

CSS fixes in Firefox 68

Web developers suffer when a supported feature works differently in different browsers. These interoperability issues are often caused by the age of the web platform, so the Firefox team has worked on many clarifications to the CSS specifications; developers then depend on the browsers to update their implementations to match the clarified spec. In the latest Firefox release, the team has shipped fixes for the ch unit and list numbering.
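The features above lend themselves to a short stylesheet sketch (the selectors and class names here are illustrative, not taken from the Firefox post):

```css
/* Scroll snapping: each child snaps to the start edge of the scroll container */
.gallery {
  overflow-x: auto;
  scroll-snap-type: x mandatory; /* replaces the removed Scroll Snap Points properties */
}
.gallery > img {
  scroll-snap-align: start;
}

/* ::marker: style the bullet itself; font properties (and color) are permitted */
li::marker {
  color: rebeccapurple;
  font-size: 1.2em;
}

/* Show a marker on a non-list element by turning it into a list item */
h2.bulleted {
  display: list-item;
  list-style-type: disc;
  list-style-position: inside;
}
```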
In addition to changes to the implementation of CSS in Firefox, Firefox 68 brings some great new additions to Developer Tools to work with CSS. Take a look at the Firefox 68 release notes to get a full overview of all the changes and additions in Firefox 68.

Prasad Ramesh
07 Dec 2018
3 min read

How has Rust and WebAssembly evolved in 2018

In a blog post, the Rust-Wasm team discussed the state of Rust and WebAssembly in 2018. The Rust and WebAssembly domain working group worked to make a shared vision into a reality: "Compiling Rust to WebAssembly should be the best choice for fast, reliable code for the Web." As the ideas evolved, another core value was formed: "Rust and WebAssembly is here to augment your JavaScript, not replace it." The group set the following goals for the joint ecosystem.

#1 JavaScript interoperation with zero cost

By leveraging zero-cost abstractions, Rust enables fast and expressive code, and the team wanted to apply this principle to the whole JS interop infrastructure. Developers could write their own boilerplate to pass DOM nodes to wasm generated by Rust, but that shouldn't be necessary. Hence they created wasm-bindgen as the foundation for zero-cost JavaScript interoperation. wasm-bindgen facilitates communication between JavaScript and WebAssembly by generating the glue code developers would otherwise have had to write themselves. The wasm-bindgen ecosystem helps developers to:

- Export rich APIs from Rust-generated wasm libraries, making them callable from JavaScript.
- Import JavaScript and Web APIs into Rust-generated wasm.

#2 Rust-generated wasm as an NPM library

Good integration is about fitting Rust-generated WebAssembly into JavaScript's distribution mechanisms, and a big part of that is NPM. The team built wasm-pack for creating and publishing NPM packages from Rust and WebAssembly code. Sharing a Rust-generated wasm module is now as simple as:

wasm-pack publish

#3 Getting developers productive fast

The team wrote a Rust and WebAssembly book to teach all the ins and outs of WebAssembly development with Rust. It features a tutorial that builds an implementation of Conway's Game of Life and teaches you how to write tests, debug, and diagnose slow code paths.
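The Game of Life tutorial is a good illustration of the division of labor: the simulation logic is plain Rust that compiles unchanged to wasm, and only the exported interface needs wasm-bindgen attributes. A minimal sketch of the per-cell rule (the function name and signature are illustrative, not taken from the book):

```rust
/// Conway's Game of Life rule for a single cell:
/// a live cell survives with 2 or 3 live neighbors,
/// a dead cell becomes alive with exactly 3.
fn next_state(alive: bool, live_neighbors: u8) -> bool {
    matches!(
        (alive, live_neighbors),
        (true, 2) | (true, 3) | (false, 3)
    )
}

fn main() {
    // A lonely live cell dies; a dead cell with 3 neighbors is born.
    assert!(!next_state(true, 1));
    assert!(next_state(false, 3));
    println!("rules check out");
}
```

In a wasm build, annotating the exported struct or function with #[wasm_bindgen] is what makes wasm-bindgen generate the JavaScript glue around logic like this.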
Getting a Rust-WebAssembly project set up initially involves boilerplate and configuration that new users may find difficult and experienced ones may find a waste of time. Hence the team created a variety of project templates for different use cases:

- wasm-pack-template, to create NPM libraries with Rust and wasm.
- create-wasm-app, to create web applications built on top of Rust-generated wasm NPM libraries.
- rust-webpack-template, to create whole web applications with Rust, WebAssembly, and the Webpack bundler.
- rust-parcel-template, to create whole web applications with Rust, WebAssembly, and the Parcel bundler.

#4 Rust-generated wasm needs to be testable and debuggable

By default, wasm can't log panics or errors because it doesn't have any "syscall" or I/O functionality; imports have to be added manually, and the module instantiated with the appropriate functions. To remedy this, and to ensure that panics are always debuggable, the team created the console_error_panic_hook crate, which redirects panic messages to the browser's devtools console. For more details on the state of the joint ecosystem in 2018, visit the Rust and WebAssembly Blog.

Mozilla shares plans to bring desktop applications, games to WebAssembly and make deeper inroads for the future web
Red Hat announces full support for Clang/LLVM, Go, and Rust
WebAssembly – Trick or Treat?
Bhagyashree R
07 Aug 2019
3 min read

Google's $5.5m ‘cookie’ privacy settlement that paid nothing to users is now voided by a U.S. appeals court

Yesterday, the U.S. Court of Appeals for the Third Circuit voided Google's $5.5m 'cookie' privacy settlement that paid nothing to consumers. The settlement was meant to resolve the case against Google for violating user privacy by installing cookies in their browsers. This comes after the decision was challenged by the Center for Class Action Fairness (CCAF), an institution representing class members against unfair class action procedures and settlements.

What this Google 'cookie' case was about

The class-action case accuses Google of creating a web browser cookie that tracks a user's data. It alleges that the cookie tracked the data of Safari and Internet Explorer users even if they had properly configured their privacy settings. The plaintiffs claim that Google invaded their privacy under the California constitution and the state tort of intrusion upon seclusion.

In February 2017, U.S. District Judge Sue Robinson in Delaware ruled that Google would stop using cookies for Safari browsers and pay a $5.5 million settlement. The settlement covered fees and costs of the class counsel, incentive awards for the named class representatives, and cy pres distributions; it did not include any direct compensation to class members. The six cy pres recipients were data privacy organizations who agreed to use the funds for researching and promoting browser privacy.

Cy pres, which means "as near as possible," allows the court to distribute the money from a class action settlement to a charitable organization. This is done when it becomes impossible, impracticable, or illegal to perform the settlement otherwise. Some of the cy pres recipients had pre-existing associations with Google and the class counsel, which raised concern.
"Through the proposed class-action settlement, the purported wrongdoer promises to pay a couple million dollars to class counsel and make a cy pres contribution to organizations it was already donating to otherwise (at least one of which has an affiliation with class counsel)," Circuit Judge Thomas Ambro said. He noted that U.S. Chief Justice John Roberts has previously expressed concerns about cy pres. Many federal courts are also quite skeptical of cy pres awards, as they can prompt class counsel to put their own interests ahead of their clients'.

Ambro further said that the District Court's fact-finding was insufficient: "In this context, we believe the District Court's fact-finding and legal analysis were insufficient for us to review its order certifying the class and approving the fairness, reasonableness, and adequacy of the settlement. We thus vacate and remand for further proceedings in accord with this opinion."

CCAF's objection to the settlement was overruled by the U.S. District Court for the District of Delaware on February 2, 2017. Ted Frank, CCAF's director, who is also a class member in this case, filed a notice of appeal on March 1, 2017. Frank believes the money awarded to the privacy groups should instead have gone to class members. The objection is also supported by 13 state attorneys general. "The state attorneys general agree with CCAF that the feasibility of distributing funds depends on whether it's impossible to distribute funds to some class members, not whether it's possible to distribute to all class members," wrote CCAF.

Now the case returns to the Delaware court. You can read more about Google's Cookie Placement Consumer Privacy Litigation case on the Hamilton Lincoln Law Institute site.

Google discriminates against pregnant women, an employee memo alleges
Google Chrome to simplify URLs by hiding special-case subdomains
Google Project Zero reveals six "interactionless" bugs that can affect iOS via Apple's iMessage

Amrata Joshi
18 Sep 2019
3 min read

Percona announces Percona Distribution for PostgreSQL to support open source databases 

Yesterday, the team at Percona, an open-source database software and services provider, announced Percona Distribution for PostgreSQL to offer expanded support for open source databases. It provides organizations with a fully supported distribution of the database and its management tools so that applications based on PostgreSQL can deliver higher performance.

Based on PostgreSQL v11.5, Percona Distribution for PostgreSQL supports the database for cloud or on-premises deployments. The new distribution will be unveiled at Percona Live Europe in Amsterdam (September 30 to October 2).

Percona Distribution for PostgreSQL includes the following open-source tools to manage database instances and ensure that data is available, secure, and backed up for recovery:

- pg_repack, a third-party extension that rebuilds PostgreSQL database objects without requiring a table lock.
- pgaudit, a third-party extension that provides in-depth session and/or object audit logging via the standard logging facility in PostgreSQL. This helps PostgreSQL users produce detailed audit logs for compliance and certification purposes.
- pgBackRest, a backup tool that replaces the built-in PostgreSQL backup offering. pgBackRest can scale to handle large database workloads and can help companies minimize storage requirements by using streaming compression; it uses delta restores to lower the amount of time required to complete a restore.
- Patroni, a high-availability solution for PostgreSQL implementations that can be used in production deployments.

The list also includes additional extensions supported by the PostgreSQL Global Development Group. The new distribution will provide users with enterprise support, services, and consulting for their open-source database instances across multiple distributions, on-premises and in the cloud.
The team further announced that Percona Monitoring and Management will now support PostgreSQL. Peter Zaitsev, co-founder and CEO of Percona, said, "Companies are creating more data than ever, and they have to store and manage this data effectively." Zaitsev added, "Open source databases are becoming the platforms of choice for many organizations, and Percona provides the consultancy and support services that these companies rely on to be successful. Adding a distribution of PostgreSQL alongside our current options for MySQL and MongoDB helps our customers leverage the best of open source for their applications as well as get reliable and efficient support."

To know more about Percona Distribution for PostgreSQL, check out the official page.

Other interesting news in data

OpenAI researchers advance multi-agent competition by training AI agents in a simple hide and seek environment
The House Judiciary Antitrust Subcommittee asks Amazon, Facebook, Alphabet, and Apple for details including private emails in the wake of antitrust investigations
$100 million 'Grant for the Web' to promote innovation in web monetization jointly launched by Mozilla, Coil and Creative Commons

Anonymous
22 Dec 2020
5 min read

Six topics on IT's mind for scaling analytics next year from What's New

Brian Matsubara, RVP of Global Technology Alliances | Kristin Adderson | December 22, 2020

We recently wrapped up participation in the all-virtual AWS re:Invent 2020, where we shared our experiences from scaling Tableau Public ten-fold this year. What an informative few weeks! It wasn't surprising that the theme of scalability came up in many sessions; as IT leaders and professionals, you're working hard to support remote workforces and evolving business needs in our current situation. This includes offering broader access to data and analytics and embracing the cloud to better adapt, innovate, and grow more resilient while facing the unexpected. As you welcome more individuals to the promising world of modern BI, you must ensure systems and processes are equipped to support higher demand, and empower everyone in the organization to make the most of your data and analytics investments. Let's take a closer look at what's top of mind for IT to best enable the business while scaling your analytics program.

Supporting your data infrastructure

Many organizations say remote work is here to stay, while new data and analytics use cases are constantly emerging to address the massive amounts of data that organizations collect. IT must enable an elastic environment where it's easier, faster, more reliable, and more secure to ingest, store, analyze, and share data among a dispersed workforce.

1. Deploying flexible infrastructure

With benefits including greater flexibility and more predictable operating expenses, cloud-based infrastructure can help you get analytics pipelines up and running fast, and attractive on-demand pricing makes it easier to scale resources up and down to support growing needs. If you're considering moving your organization's on-premises analytics to the cloud, you can accelerate your migration and time to value by leveraging the resources and expertise of a strategic partner.
Hear from Experian, who is deploying and scaling its analytics in the cloud and recently benefited from this infrastructure. Experian turned to Tableau and AWS for support powering its new Experian Safeguard dashboard, a free analytics tool that helps public organizations use data to pinpoint and protect vulnerable communities. The accessibility and scalability of the dashboard resulted in faster time to market and adoption by nearly 70 local authorities, emergency services, and charities now using "data for good."

2. Optimizing costs

According to IDC research, analytics spend in the cloud is growing eight times faster than other deployment types. You've probably purchased a data warehouse sized for the organization's highest-demand timeframes, but you don't need the 24/7 capacity that results in unused resources and wasted dollars. Monitor cloud costs and use patterns to make better operating, governance, and risk management decisions around your cloud deployment as it grows, and to protect your investment, especially when leadership is looking for every chance to maximize resources and keep spending streamlined.

Supporting your people

Since IT's responsibilities are more and more aligned with business objectives, like revenue growth, customer retention, and even developing new business models, it's critical to measure success beyond deploying modern BI technology. It's equally important to empower the business to adopt and use analytics to discover opportunities, create efficiencies, and drive change.

3. Onboarding and license management

As your analytics deployment grows, it's not scalable to have individuals submit one-off requests for software licenses that you then have to manually assign, configure, and track. You can take advantage of the groups you've already established in your identity and access management solution to automate the licensing process for your analytics program.
This can also reduce unused licenses, helping lines of business save a little extra budget.

4. Ensuring responsible use

Another big concern as analytics programs grow is maintaining data security and governance in a self-service model. Fortunately, you can address this while streamlining user onboarding even further by automatically configuring user permissions based on their group memberships. Coupled with well-structured analytics content, you'll not only reduce IT administrative work but also help people get faster, secure access to the trusted data that matters most to their jobs.

5. Enabling access from anywhere

When your organization is increasingly relying on data to make decisions, 24/7 support and access to customized analytics is business-critical. With secure, mobile access to analytics and an at-a-glance view of important KPIs, your users can keep a pulse on their business no matter where they are.

6. Growing data literacy

When everyone in the organization is equipped and encouraged to explore, understand, and communicate with data, you'll see amazing impact from more informed decision-making. But foundational data skills are necessary to get people engaged and using data and analytics properly. Customers have shown us creative and fun ways that IT helps build data literacy, from formal training to community-building activities. For example, St. Mary's Bank holds regular Tableau office hours, is investing more time and energy in trainings, and has games that test employees on their Tableau knowledge.

Want to learn more?

If you missed AWS re:Invent 2020, you're not out of luck! You can still register and watch on-demand content, including our own discussion of scaling Tableau Public tenfold to support customers and their growing needs for sharing COVID-19 data (featuring SVP of Product Development Ellie Fields and Director of Software Engineering Jared Scott).
You’ll learn about how we reacted to customer demands—especially from governments reporting localized data to keep constituents safe and informed during the pandemic—including shifts from on-premises to the cloud, hosting vizzes that could handle thousands, even millions, of daily hits. Data-driven transformation is an ongoing journey. Today, the organizations that are successfully navigating uncertainty are those leaning into data and analytics to solve challenges and innovate together. No matter where you are—evaluating, deploying, or scaling—the benefits of the cloud and modern BI are available to you. You can start by learning more about how we partner with AWS.
Sugandha Lahoti
11 Sep 2019
6 min read

Apple’s September 2019 Event: iPhone 11 Pro and Pro Max, Watch Series 5, Apple TV+, new iPad and more

Yesterday was a big day for Apple. Apple's September event featured a number of successors to Apple's already popular products, including the new iPhone 11, the triple-camera iPhone 11 Pro models, Watch Series 5, Apple TV+, and a new iPad. In case you missed the live update, we've got you covered with everything announced at Apple's September 2019 event.

Apple is releasing iOS 13 on September 19 as a software update for iPhone 6s models and later. Apple also said additional features will arrive on September 30 with iOS 13.1, including improvements to AirDrop.

iPhone 11 succeeds iPhone XR; iPhone 11 Pro and Pro Max come with triple cameras

No doubt the most anticipated launch of the event, the iPhone 11 was introduced as the successor to the iPhone XR. The iPhone 11 ships with iOS 13, two high-definition cameras, and Night mode for photos. The dual-camera system lets users easily zoom between the cameras, while Audio Zoom matches the audio to the video framing for more dynamic sound. Users can record videos without switching out of Photo mode with QuickTake, simply by holding the shutter button. The phone is powered by the A13 Bionic chip with all-day battery life. The A13 Bionic is built for machine learning, with a faster Neural Engine for real-time photo and video analysis, and new Machine Learning Accelerators that allow the CPU to deliver more than 1 trillion operations per second. It has a 6.1-inch all-screen Liquid Retina display, is water-resistant, and comes in six colors: red, black, white, yellow, green, and purple. iPhone 11 will also get an update for Deep Fusion, coming later this fall, a new image-processing system enabled by the Neural Engine of the A13 Bionic.
https://www.youtube.com/watch?v=H4p6njjPV_o

iPhone 11 will be available for pre-order beginning Friday, September 13, and in stores beginning Friday, September 20, starting at $699 in the US, Puerto Rico, the US Virgin Islands, and more than 30 other countries and regions.

The iPhone 11 Pro and Pro Max come with a triple-camera system that provides a pro-level camera experience with Ultra-Wide, Wide, and Telephoto cameras. The triple-camera system enables Portrait mode with a wider field of view, great for taking portraits of multiple people. The Telephoto camera features a larger ƒ/2.0 aperture that captures 40 percent more light compared to the iPhone XS for better photos and videos.

https://www.youtube.com/watch?v=cVEemOmHw9Y

However, not everyone is impressed with the aesthetics of the camera placement.

https://twitter.com/9GAG/status/1171623152562200576
https://twitter.com/lytearr_/status/1171608034105155585
https://twitter.com/shrekpepeboii/status/1171629182901600256

The iPhone 11 Pro has a 5.8-inch OLED and the Pro Max a 6.5-inch OLED; both are Super Retina XDR displays, a custom-designed OLED with up to 1,200 nits of brightness. Both also use the A13 Bionic chip, with the iPhone 11 Pro offering up to four more hours of battery life in a day than the iPhone XS, and the iPhone 11 Pro Max up to five hours more than the iPhone XS Max. iPhone 11 Pro and iPhone 11 Pro Max will be available in 64GB, 256GB, and 512GB models in midnight green, space gray, silver, and gold, starting at $999 and $1,099, respectively. Apple is also launching a new line of iPhone cases that come in a wide range of colors.

Apple Watch Series 5 now works just like… your normal watch

The new series of Apple Watch looks much like last year's model, except that it supports an always-on display. The Series 5 dims the brightness but retains all of the same visuals you'd normally see while using it.
This is different from how most smartwatches turn off the display to extend battery life, though it has the same 18-hour battery life as the Series 4. You also get international emergency calling for added personal safety. New health features include Cycle Tracking, the Noise app, and Activity Trends, and the Series 5 has the Compass app to see heading, incline, latitude, longitude, and current elevation.

https://www.youtube.com/watch?v=5bvcyIV4yzo

In addition, Apple is launching three new health studies: one for women's health, one for hearing, and one for heart health. It's partnering with major research institutions on each, and Apple Watch users can enroll through a forthcoming Apple Research app. The Apple Watch Series 5 (GPS) starts at $399 and the Series 5 (GPS + Cellular) at $499. This is the first Apple Watch to release with ceramic and titanium finishes. Sales begin Friday, September 20, in the US, Puerto Rico, and 20 other countries and regions; you can order it from apple.com and in the Apple Store app.

New 7th-gen iPad now has a 10.2-inch display

Apple's new 7th-gen iPad is upgraded from the standard 9.7-inch display size to 10.2 inches. It features the A10 Fusion processor and a new Smart Connector, and provides support for the Apple Pencil and the full-size Smart Keyboard. The iPad starts at $329 for the Wi-Fi model and $459 for the Wi-Fi + Cellular model.

Apple Arcade game subscription service

Apple's game subscription service, Apple Arcade, will launch on September 30 on iPadOS and tvOS 13, and in October on macOS Catalina. The service will initially feature over 100 new, exclusive games, all playable across iPhone, iPad, iPod touch, Mac, and Apple TV. To give users maximum flexibility when playing, some games will support controllers, including Xbox Wireless Controllers with Bluetooth, PlayStation DualShock 4, and MFi game controllers, in addition to touch controls and the Siri Remote.
Apple TV+ launches November 1 at $4.99 per month

Apple's flagship all-original video subscription service, Apple TV+, will be available at $4.99 per month starting November 1. This puts Apple in direct competition with Disney, whose subscription service Disney+ is available for $7 a month. Apple TV+ will offer a lineup of shows, movies, and documentaries focusing on original content produced exclusively for the service. Apple will also include a year-long subscription to Apple TV+ for free if you buy a new Apple product, including new iPads, iPhones, laptops, or desktops.

If you are in a hurry, here's a 2-minute video of the Apple event: https://youtu.be/ZA3MV2V--TU

Keep checking this space for more Apple coverage.

More news for Apple

Apple Music is now available on your web browser
Is Apple's 'Independent Repair Provider Program' a bid to avoid the 'Right To Repair' bill
Apple announces 'WebKit Tracking Prevention Policy' that considers web tracking as a security vulnerability

Bhagyashree R
08 Mar 2019
2 min read

Ionic 4.1 named Hydrogen is out!

After releasing Ionic 4.0 in January this year, the Ionic team announced the release of Ionic 4.1 on Wednesday. The release is named "Hydrogen," following the team's convention of naming releases after elements of the periodic table. Along with a few bug fixes, Ionic 4.1 comes with features like a skeleton text update, indeterminate checkboxes, and more.

Some of the new features in Ionic 4.1

Skeleton text update

Using the ion-skeleton-text component, developers can now make skeleton screens for list items look more natural. You can use ion-skeleton-text inside media controls like ion-avatar and ion-thumbnail, and the size of skeletons placed inside avatars and thumbnails is automatically adjusted to their containers. You can also style the skeletons with a custom border-radius, width, height, or any other CSS styles for use outside of Ionic components.

Indeterminate checkboxes

A new property named indeterminate has been added to the ion-checkbox component. When its value is true, the checkbox is shown in a half-on/half-off state. This property is handy in cases where you have a "check all" checkbox but only some of the options in the group are selected.

CSS display utilities

Ionic 4.1 comes with a few new CSS classes for hiding elements and responsive design: ion-hide and ion-hide-{breakpoint}-{dir}. To hide an element, use the ion-hide class; use the ion-hide-{breakpoint}-{dir} classes to hide an element based on breakpoints for certain screen sizes.

To know more about the other features in detail, visit Ionic's official website.

Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular
Ionic v4 RC released with improved performance, UI Library distribution and more
The Ionic team announces the release of Ionic React Beta
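As a rough sketch of how these features appear in markup (attribute usage here is illustrative; check the Ionic component docs for the exact API):

```html
<!-- Skeleton placeholder inside an avatar; the skeleton sizes itself to its container -->
<ion-item>
  <ion-avatar slot="start">
    <ion-skeleton-text></ion-skeleton-text>
  </ion-avatar>
  <ion-label>
    <ion-skeleton-text width="60%"></ion-skeleton-text>
  </ion-label>
</ion-item>

<!-- A "check all" box shown half-on/half-off while only some options are selected -->
<ion-checkbox id="check-all" indeterminate="true"></ion-checkbox>

<!-- Display utilities: hidden everywhere vs. hidden at the md breakpoint and up -->
<div class="ion-hide">Never shown</div>
<div class="ion-hide-md-up">Shown only below the md breakpoint</div>
```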