
Tech News


AI Tools Assisting with Mental Health Issues Brought on by Pandemic, from AI Trends

Matthew Emerick
08 Oct 2020
5 min read
By Shannon Flynn, AI Trends Contributor

The pandemic is a perfect storm for mental health issues. Isolation from others, economic uncertainty, and fear of illness can all contribute to poor mental health — and right now, most people around the world face all three.

New research suggests that the virus is tangibly affecting mental health. Rates of depression and anxiety symptoms are much higher than normal. In some population groups, like students and young people, these numbers are almost double what they’ve been in the past.

Some researchers are even concerned that the prolonged, unavoidable stress of the virus may result in people developing long-term mental health conditions — including depression, anxiety disorders and even PTSD, according to an account in Business Insider. Those on the front lines, like medical professionals, grocery store clerks and sanitation workers, may be at an especially high risk.

Use of Digital Mental Health Tools with AI on the Rise

Automation is already widely used in health care, primarily in the form of technology like AI-based electronic health records and automated billing tools, according to a blog post from ZyDoc, a supplier of medical transcription applications. It’s likely that COVID-19 will only increase the use of automation in the industry. Around the world, medical providers are adopting new tech, like self-piloting robots that act as hospital nurses. These providers are also using UV light-based cleaners to sanitize entire rooms more quickly.

Digital mental health tools are also on the rise, along with fully automated AI tools that help patients get the care they need.

The AI-powered behavioral health platform Quartet, for example, is one of several automated tools that aim to help diagnose patients, screening them for common conditions like depression, anxiety, and bipolar spectrum disorders, according to a recent account in AI Trends.
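Screeners like the ones Quartet and Rose offer are typically built on standard clinical instruments such as the PHQ-9 depression questionnaire. As a rough illustration only — not any vendor's actual code; the function name and sample answers are hypothetical, though the severity cutoffs are the standard published PHQ-9 bands — scoring such an instrument reduces to summing nine 0–3 answers and mapping the total to a band:

```python
# Illustrative sketch of automated questionnaire scoring. Uses the standard
# PHQ-9 cutoffs (0-4 minimal, 5-9 mild, 10-14 moderate, 15-19 moderately
# severe, 20-27 severe); everything else here is hypothetical.

def phq9_severity(answers):
    """Score a PHQ-9 depression questionnaire: nine items, each answered 0-3."""
    if len(answers) != 9 or not all(0 <= a <= 3 for a in answers):
        raise ValueError("PHQ-9 expects nine answers scored 0-3")
    total = sum(answers)
    for cutoff, label in [(4, "minimal"), (9, "mild"), (14, "moderate"),
                          (19, "moderately severe"), (27, "severe")]:
        if total <= cutoff:
            return label

print(phq9_severity([1, 1, 2, 1, 0, 1, 2, 1, 1]))  # total 10 -> "moderate"
```

A real screening platform layers consent, clinician review, and longitudinal tracking on top of a simple core like this; the scoring itself is deliberately transparent.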
Other software — like a new app developed by engineers at the University of New South Wales in Sydney, Australia — can screen patients for different mental health conditions, including dementia. With a diagnosis, patients are better equipped to find the care they need, such as from mental health professionals with in-depth knowledge of a particular condition.

Another tool, an AI-based chatbot called Woebot, developed by Woebot Labs, Inc., uses brief daily chats to help people maintain their mental health. The bot is designed to teach skills related to cognitive behavioral therapy (CBT), a form of talk therapy that assists patients with identifying and managing maladaptive thought patterns.

In April, Woebot Labs updated the bot to provide specialized COVID-19-related support in the form of a new therapeutic modality, called Interpersonal Psychotherapy (IPT), which helps users “process loss and role transition,” according to a press release from the company.

Both Woebot and Quartet provide 24/7 access to mental health resources via the internet. This means that — so long as a person has an internet connection — they can’t be deterred by an inaccessible building or lengthy waitlist.

New AI Tools Supporting Clinicians

Some groups need more support than others. Clinicians working in hospitals are some of the most vulnerable to stress and anxiety. Right now, they’re facing long hours, high workloads, and frequent potential exposure to COVID.

Developers and health care professionals are also working together to create new AI tools that will support clinicians as they tackle the challenges of providing care during the pandemic.

One new AI-powered mental health platform, developed by the mobile mental health startup Rose, will gather real-time data on how clinicians are feeling via “questionnaires and free-response journal entries, which can be completed in as few as 30 seconds,” according to an account in Fierce Healthcare.
The tool will scan through these responses, tracking the clinician’s mental health and stress levels. Over time, it should be able to identify situations and events likely to trigger dips in mental health or increased anxiety, and tentatively diagnose conditions like depression, anxiety, and trauma.

Front-line health care workers are up against an unprecedented challenge, facing a wave of new patients and potential exposure to COVID, according to Kavi Misri, founder and CEO of Rose. As a result, many of these workers may be more vulnerable to stress, anxiety and other mental health issues.

“We simply can’t ignore this emerging crisis that threatens the mental health and stability of our essential workers – they need support,” stated Misri.

Rose is also providing clinicians access to more than 1,000 articles and videos on mental health topics. Each user’s feed of content is curated based on the data gathered by the platform.

Right now, Brigham and Women’s Hospital, the second-largest teaching hospital at Harvard, is experimenting with the technology in a pilot program. If effective, the tech could soon be used around the country to support clinicians on the front lines of the crisis.

Mental health will likely stay a major challenge for as long as the pandemic persists. Fortunately, AI-powered experimental tools for mental health should help to manage the stress, depression and trauma that have developed from dealing with COVID-19.

Read the source articles and information in Business Insider, a blog post from ZyDoc, in AI Trends, a press release from Woebot Labs, and in Fierce Healthcare.

Shannon Flynn is a managing editor at Rehack, a website featuring coverage of a range of technology niches.


.NET Framework republishing of July 2020 Security Only Updates from .NET Blog

Matthew Emerick
13 Oct 2020
3 min read
Today, we are republishing the July 2020 Security Only Updates for .NET Framework to resolve a known issue that affected the original release. You should install this version (V2) of the update as part of your normal security routine.

Security

CVE-2020-1147 – .NET Framework Remote Code Execution Vulnerability

A remote code execution vulnerability exists in .NET Framework when the software fails to check the source markup of XML file input. An attacker who successfully exploited the vulnerability could run arbitrary code in the context of the process responsible for deserialization of the XML content. To exploit this vulnerability, an attacker could upload a specially crafted document to a server utilizing an affected product to process content. The security update addresses the vulnerability by correcting how .NET Framework validates the source markup of XML content.

This security update affects how .NET Framework’s System.Data.DataTable and System.Data.DataSet types read XML-serialized data. Most .NET Framework applications will not experience any behavioral change after the update is installed. For more information on how the update affects .NET Framework, including examples of scenarios which may be affected, please see the DataTable and DataSet security guidance document. To learn more about the vulnerability, see CVE-2020-1147.

Known Issues

This release resolves the known issue below.

Symptoms: After you apply this update, some applications experience a TypeInitializationException when they try to deserialize System.Data.DataSet or System.Data.DataTable instances from XML within a SQL CLR stored procedure. The stack trace for this exception appears as follows:

System.TypeInitializationException: The type initializer for 'Scope' threw an exception.
---> System.IO.FileNotFoundException: Could not load file or assembly 'System.Drawing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified.
   at System.Data.TypeLimiter.Scope.IsTypeUnconditionallyAllowed(Type type)
   at System.Data.TypeLimiter.Scope.IsAllowedType(Type type)
   at System.Data.TypeLimiter.EnsureTypeIsAllowed(Type type, TypeLimiter capturedLimiter)

Resolution: Install the latest version of this update, released on October 13th, 2020.

Getting the Update

The Security and Quality Rollup is available via Windows Update, Windows Server Update Services, and the Microsoft Update Catalog. For Windows 10, .NET Framework 4.8 updates are available via Windows Update, Windows Server Update Services, and the Microsoft Update Catalog. Updates for other versions of .NET Framework are part of the Windows 10 Monthly Cumulative Update.

Note: Customers that rely on Windows Update and Windows Server Update Services will automatically receive the .NET Framework version-specific updates. Advanced system administrators can also make use of the direct Microsoft Update Catalog download links below for .NET Framework-specific updates. Before applying these updates, please carefully review the .NET Framework version applicability to ensure that you only install updates on systems where they apply.

The following table is for earlier Windows and Windows Server versions.
Product version and Security Only Update KB numbers (Catalog links in the original post):

Windows 8.1, Windows RT 8.1 and Windows Server 2012 R2 (KB 4566468)
  .NET Framework 3.5: KB 4565580
  .NET Framework 4.5.2: KB 4565581
  .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: KB 4565585
  .NET Framework 4.8: KB 4565588

Windows Server 2012 (KB 4566467)
  .NET Framework 3.5: KB 4565577
  .NET Framework 4.5.2: KB 4565582
  .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: KB 4565584
  .NET Framework 4.8: KB 4565587

Windows 7 SP1 and Windows Server 2008 R2 SP1 (KB 4566466)
  .NET Framework 3.5.1: KB 4565579
  .NET Framework 4.5.2: KB 4565583
  .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: KB 4565586
  .NET Framework 4.8: KB 4565589

Windows Server 2008 (KB 4566469)
  .NET Framework 2.0, 3.0: KB 4565578
  .NET Framework 4.5.2: KB 4565583
  .NET Framework 4.6: KB 4565586

The post .NET Framework republishing of July 2020 Security Only Updates appeared first on .NET Blog.
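The underlying fix is a type allow-list: the patched DataSet and DataTable refuse to instantiate unexpected types named in XML markup (the TypeLimiter frames in the stack trace are part of that machinery). A language-neutral sketch of the same idea — written in Python rather than .NET, with a hypothetical element layout, and in no way the actual .NET implementation:

```python
# Conceptual sketch of allow-list-based deserialization, the class of fix the
# update applies to DataSet/DataTable XML handling. Stdlib only; the element
# layout and allow-list here are invented for illustration.
import xml.etree.ElementTree as ET

ALLOWED_TYPES = {"int": int, "str": str, "float": float}

def load_value(xml_text):
    """Deserialize <value type="...">...</value>, refusing unknown types."""
    node = ET.fromstring(xml_text)
    type_name = node.get("type", "")
    ctor = ALLOWED_TYPES.get(type_name)
    if ctor is None:
        # Same spirit as the patched DataSet: an unexpected type named in
        # attacker-controlled markup is rejected, not instantiated.
        raise ValueError(f"type {type_name!r} is not on the allow list")
    return ctor(node.text)

print(load_value('<value type="int">42</value>'))  # -> 42
```

The point of the design is that the deserializer never looks up arbitrary type names from input; anything outside the fixed table fails closed.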


Google releases Magenta studio beta, an open source python machine learning library for music artists

Melisha Dsouza
14 Nov 2018
3 min read
On 11th November, the Google Brain team released Magenta Studio in beta, a suite of free music-making tools built on their machine learning models. It is a collection of music plugins built on Magenta’s open source tools and models. These tools are available both as standalone Electron applications and as plugins for Ableton Live.

What is Project Magenta?

Magenta is a research project started by researchers and engineers from the Google Brain team, with significant contributions from many other stakeholders. The project explores the role of machine learning in the process of creating art and music. It primarily involves developing new deep learning and reinforcement learning algorithms to generate songs, images, drawings, and other materials. It also explores the possibility of building smart tools and interfaces that allow artists and musicians to extend their processes using these models.

Magenta is powered by TensorFlow and is distributed as an open source Python library. This library allows users to manipulate music and image data, which can then be used to train machine learning models and to generate new content from those models. The project aims to demonstrate that machine learning can be used to enable and enhance the creative potential of all people.

If Magenta Studio is used via Ableton, the Ableton Live plugin reads and writes clips from Ableton’s Session View. If a user chooses to run the studio as a standalone application, it reads and writes files from the user’s file system without requiring Ableton.

Some of the demos include:

#1 Piano Scribe

Many of the generative models in Magenta.js require the input to be a symbolic representation like Musical Instrument Digital Interface (MIDI). But now, Magenta converts raw audio to MIDI using Onsets and Frames, a neural network trained for polyphonic piano transcription. This means that audio alone is enough to obtain MIDI output in the browser.

#2 Beat Blender

Beat Blender is built by Google Creative Lab using MusicVAE. Users can now generate two-dimensional palettes of drum beats and draw paths through the latent space to create evolving beats.

#3 Tenori-off

Users can utilize Magenta.js to generate drum patterns when they hit the “Improvise” button. This is a take on an electronic sequencer.

#4 NSynth Super

This is a machine learning algorithm that uses a deep neural network to learn the characteristics of sounds, and then creates a completely new sound based on those characteristics. NSynth synthesizes an entirely new sound using the acoustic qualities of the original sounds. For instance, users can get a sound that’s part flute and part sitar all at once.

You can head over to the Magenta Blog for more exciting demos. Alternatively, head over to magenta.tensorflow.org to read more about this announcement.
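Magenta's Python library works on symbolic music data. The sketch below mimics, in plain Python, the kind of note-sequence structure it manipulates; it mirrors the idea of Magenta's NoteSequence representation (notes as pitch plus start/end times), not its actual API:

```python
# Stdlib-only analogy of the symbolic music data Magenta's Python library
# manipulates: a "note sequence" of (pitch, start, end) notes, plus the
# simplest possible symbolic transformation. Not the real Magenta API.
from dataclasses import dataclass

@dataclass
class Note:
    pitch: int     # MIDI pitch number, 60 = middle C
    start: float   # onset time in seconds
    end: float     # offset time in seconds

def transpose(seq, semitones):
    """Shift every note's pitch by a fixed number of semitones."""
    return [Note(n.pitch + semitones, n.start, n.end) for n in seq]

# A rising C-major arpeggio: C4, E4, G4.
c_major = [Note(60, 0.0, 0.5), Note(64, 0.5, 1.0), Note(67, 1.0, 1.5)]
print([n.pitch for n in transpose(c_major, 2)])  # -> [62, 66, 69]
```

Models like MusicVAE and Onsets and Frames consume and produce sequences of exactly this shape, which is what lets audio transcription, interpolation, and generation plug into the same pipeline.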


Apple releases Safari 13 with opt-in dark mode support, FIDO2-compliant USB security keys support, and more!

Bhagyashree R
20 Sep 2019
3 min read
Yesterday, Apple released Safari 13 for iOS 13, macOS 10.15 (Catalina), macOS Mojave, and macOS High Sierra. This release comes with opt-in dark mode support, FIDO2-compliant USB security key support, updated Intelligent Tracking Prevention, and much more.

Key updates in Safari 13

Desktop-class browsing for iPad users

Starting with Safari 13, iPad users will have the same browsing experience as macOS users. In addition to displaying websites the same as desktop Safari, it will also provide the same capabilities, including more keyboard shortcuts, a download manager with background downloads, and support for top productivity websites.

Updates related to authentication and passwords

Safari 13 will prompt users to strengthen their passwords when they sign into a website. On macOS, users will be able to use FIDO2-compliant USB security keys in Safari. Also, support has been added for “Sign in with Apple” in Safari and WKWebView.

Read also: W3C and FIDO Alliance declare WebAuthn as the web standard for password-free logins

Security and privacy updates

A new permission API is added for DeviceMotionEvent and DeviceOrientationEvent on iOS. The DeviceMotionEvent class encapsulates details like the measurements of the interval, rotation rate, and acceleration of a device, whereas the DeviceOrientationEvent class encapsulates the angles of rotation (alpha, beta, and gamma) in degrees and the heading. Other updates include changes to third-party iframes to prevent them from automatically navigating the page, and Intelligent Tracking Prevention is updated to prevent cross-site tracking through referrer and link decoration.

Performance-specific updates

While using Safari 13, iOS users will find that the initial rendering time for web pages is reduced. The memory consumed by JavaScript, including for non-web clients, is also reduced.

WebAPI updates

Safari 13 comes with a new Pointer Events API to enable consistent access to mouse, trackpad, touch, and Apple Pencil events. It also supports the Visual Viewport API, which adjusts web content to avoid overlays such as the onscreen keyboard.

Deprecated features in Safari 13

WebSQL and Legacy Safari Extensions are no longer supported. To replace your previously provided Legacy Safari Extensions, Apple provides two options. First, you can configure your Safari App Extension to provide an upgrade path that automatically removes the previous Legacy Safari Extension when it is installed. Second, you can manually convert your Legacy Safari Extension to a Safari App Extension.

In a discussion on Hacker News, users were pleased with the support for the Pointer Events API. A user commented, “The Pointer Events spec is a real joy. For example, if you want to roll your own "drag" event for a given element, the API allows you to do this without reference to document or a parent container element. You can just declare that the element currently receiving pointer events capture subsequent pointer events until you release it. Additionally, the API naturally lends itself to patterns that can easily be extended for multi-touch situations.”

Others also expressed their concern regarding the deprecation of Legacy Safari Extensions. A user added, “It really, really is a shame that they removed proper extensions. While Safari never had a good extension story, it was at least bearable, and in all other regards its simply the best Mac browser. Now I have to take a really hard look at switching back to Firefox, and that would be a downgrade in almost every regard I care about. Pity.”

Check out the official release notes of Safari 13 to know more in detail.


Researchers propose a reinforcement learning method that can hack Google reCAPTCHA v3

Natasha Mathur
16 Apr 2019
3 min read
A team of researchers, namely Ismail Akrout, Amal Feriani, and Mohamed Akrout, published a paper titled ‘Hacking Google reCAPTCHA v3 using Reinforcement Learning’ last month. In the paper, the researchers present a Reinforcement Learning (RL) method that can easily bypass Google reCAPTCHA v3.

Google’s reCAPTCHA system is the most widely used defense mechanism for distinguishing humans from bots. It is used to protect sites from automated agents, bots, attacks, and spam. Google’s reCAPTCHA v3 makes use of machine learning to return a risk assessment score between 0.0 and 1.0 that characterizes the trustability of the user: a score close to 1.0 means the user is likely human; a low score suggests a bot.

Method Used

The problem is formulated as a grid world in which the agent learns to move the mouse and click on the reCAPTCHA button to receive a high score. The performance of the agent is studied while varying the cell size of the world. The paper shows that performance drops when the agent takes big steps toward the goal. Finally, a divide-and-conquer strategy is used to defeat the reCAPTCHA system at any grid resolution.

The researchers first produce a plausible formalization of the problem as a Markov Decision Process (MDP) that can be solved using advanced RL algorithms. Then, a new environment is introduced that simulates the user experience with websites that have the reCAPTCHA system enabled. Finally, they analyze how RL agents learn, or fail, to defeat Google reCAPTCHA.

In order to pass the reCAPTCHA test, a human user is required to move the mouse from an initial position through a sequence of steps until the user reaches the reCAPTCHA check-box and clicks on it. Based on how the interaction goes, the reCAPTCHA system rewards the user with a score. In the paper's figure, the mouse position is the starting point and the position of the reCAPTCHA is the goal.

A grid is constructed in which every pixel between these two points is a possible position for the mouse. The paper assumes that a normal user will not necessarily move the mouse pixel by pixel, so a cell size is defined, referring to the number of pixels between two consecutive positions.

(Figure: the agent's mouse movement across the grid)

After this, a browser page is opened at each episode with the user's mouse at a random position. The agent then takes a sequence of actions until it reaches the reCAPTCHA or hits the time limit. Once the episode is complete, it receives the feedback of the reCAPTCHA algorithm, as any normal human user would.

Results

The researchers trained a Reinforce agent on a grid world of a specific size. The results presented in the paper are success rates across 1000 runs. For a run to count as successful, the agent had to defeat the reCAPTCHA and obtain a score of 0.9. The agents were trained with a discount factor of 0.99 and successfully defeated the reCAPTCHA. “Our proposed method achieves a success rate of 97.4% on a 100 × 100 grid and 96.7% on a 1000 × 1000 screen resolution,” state the researchers.

For more information, check out the official research paper.
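The grid-world setup can be sketched in a few lines of Python. This toy version hard-codes a greedy policy in place of the trained REINFORCE agent, so it only illustrates the environment (random start cell, goal cell, cell size), not the learning:

```python
# Toy version of the paper's grid world: the mouse starts at a random cell,
# the goal cell holds the reCAPTCHA checkbox, and cell_size controls how many
# pixels one step covers. A greedy stand-in policy replaces the trained
# REINFORCE agent, so this illustrates the MDP, not the RL method itself.
import random

def run_episode(width, height, cell_size, max_steps=500):
    goal = (width // cell_size, height // cell_size)
    pos = [random.randrange(goal[0] + 1), random.randrange(goal[1] + 1)]
    for _ in range(max_steps):
        if tuple(pos) == goal:
            return True  # reached the checkbox and "clicked" it
        # Greedy action: step one cell toward the goal on each axis.
        for axis in (0, 1):
            if pos[axis] < goal[axis]:
                pos[axis] += 1
    return False  # hit the episode's time limit

random.seed(0)
successes = sum(run_episode(1000, 1000, cell_size=10) for _ in range(100))
print(f"{successes}/100 episodes reached the reCAPTCHA")
```

A larger cell size shrinks the grid (bigger mouse jumps per step), which is exactly the knob the paper varies; the learned agent's difficulty with big steps, and the divide-and-conquer fix, only show up once a trained policy replaces the greedy one.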


Amazon S3 Update – Three New Security & Access Control Features from AWS News Blog

Matthew Emerick
02 Oct 2020
5 min read
A year or so after we launched Amazon S3, I was in an elevator at a tech conference and heard a couple of developers use “just throw it into S3” as the answer to their data storage challenge. I remember that moment well because the comment was made so casually, and it was one of the first times that I fully grasped just how quickly S3 had caught on.

Since that launch, we have added hundreds of features and multiple storage classes to S3, while also reducing the cost to store a gigabyte of data for a month by almost 85% (from $0.15 to $0.023 for S3 Standard, and as low as $0.00099 for S3 Glacier Deep Archive). Today, our customers use S3 to support many different use cases including data lakes, backup and restore, disaster recovery, archiving, and cloud-native applications.

Security & Access Control

As the set of use cases for S3 has expanded, our customers have asked us for new ways to regulate access to their mission-critical buckets and objects. We added IAM policies many years ago, and Block Public Access in 2018. Last year we added S3 Access Points (Easily Manage Shared Data Sets with Amazon S3 Access Points) to help you manage access in large-scale environments that might encompass hundreds of applications and petabytes of storage.

Today we are launching S3 Object Ownership as a follow-on to two other S3 security & access control features that we launched earlier this month. All three features are designed to give you even more control and flexibility:

Object Ownership – You can now ensure that newly created objects within a bucket have the same owner as the bucket.

Bucket Owner Condition – You can now confirm the ownership of a bucket when you create a new object or perform other S3 operations.

Copy API via Access Points – You can now access S3’s Copy API through an Access Point.

You can use all of these new features in all AWS regions at no additional charge. Let’s take a look at each one!
Object Ownership

With the proper permissions in place, S3 already allows multiple AWS accounts to upload objects to the same bucket, with each account retaining ownership and control over the objects. This many-to-one upload model can be handy when using a bucket as a data lake or another type of data repository. Internal teams or external partners can all contribute to the creation of large-scale centralized resources. With this model, the bucket owner does not have full control over the objects in the bucket and cannot use bucket policies to share objects, which can lead to confusion.

You can now use a new per-bucket setting to enforce uniform object ownership within a bucket. This will simplify many applications, and will obviate the need for the Lambda-powered self-COPY that has become a popular way to do this up until now. Because this setting changes the behavior seen by the account that is uploading, the PUT request must include the bucket-owner-full-control ACL. You can also choose to use a bucket policy that requires the inclusion of this ACL.

To get started, open the S3 Console, locate the bucket and view its Permissions, click Object Ownership, then Edit. Select Bucket owner preferred and click Save.

As I mentioned earlier, you can use a bucket policy to enforce object ownership (read About Object Ownership and this Knowledge Center Article to learn more). Many AWS services deliver data to the bucket of your choice, and are now equipped to take advantage of this feature. S3 Server Access Logging, S3 Inventory, S3 Storage Class Analysis, AWS CloudTrail, and AWS Config now deliver data that you own. You can also configure Amazon EMR to use this feature by setting fs.s3.canned.acl to BucketOwnerFullControl in the cluster configuration (learn more). Keep in mind that this feature does not change the ownership of existing objects.
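A bucket policy that requires the bucket-owner-full-control ACL on uploads, as described above, can be written with a Deny statement keyed on the s3:x-amz-acl condition. This is a sketch of that common pattern; the bucket name is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RequireBucketOwnerFullControlOnPut",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    }
  ]
}
```

With this policy attached, any PUT that omits the ACL is rejected, so every object that does land in the bucket is eligible for the new ownership behavior.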
Also, note that you will now own more S3 objects than before, which may cause changes to the numbers you see in your reports and other metrics. AWS CloudFormation support for Object Ownership is under development and is expected to be ready before AWS re:Invent.

Bucket Owner Condition

This feature lets you confirm that you are writing to a bucket that you own. You simply pass a numeric AWS Account ID to any of the S3 Bucket or Object APIs using the expectedBucketOwner parameter or the x-amz-expected-bucket-owner HTTP header. The ID indicates the AWS Account that you believe owns the subject bucket. If there’s a match, then the request will proceed as normal. If not, it will fail with a 403 status code. To learn more, read Bucket Owner Condition.

Copy API via Access Points

S3 Access Points give you fine-grained control over access to your shared data sets. Instead of managing a single and possibly complex policy on a bucket, you can create an access point for each application, and then use an IAM policy to regulate the S3 operations that are made via the access point (read Easily Manage Shared Data Sets with Amazon S3 Access Points to see how they work). You can now use S3 Access Points in conjunction with the S3 CopyObject API by using the ARN of the access point instead of the bucket name (read Using Access Points to learn more).

Use Them Today

As I mentioned earlier, you can use all of these new features in all AWS regions at no additional charge.

— Jeff;

Next.js 8 releases with a serverless mode, better build-time memory usage, and more

Bhagyashree R
12 Feb 2019
3 min read
After releasing Next.js 7 in September last year, the team behind Next.js released the production-ready Next.js 8 yesterday. This release comes with a serverless mode, build-time memory usage reduction, prefetch performance improvements, security improvements, and more. As with previous releases, all the updates are backward compatible.

The following are some of the updates Next.js 8 comes with:

Serverless mode

Serverless deployment comes with various benefits, including more reliability, scalability, and separation of concerns, by splitting an application into smaller parts, also called lambdas. To bring these benefits to Next.js users, this version comes with a serverless mode in which each page in the ‘pages’ directory is treated as a lambda. It also comes with low-level APIs for implementing serverless deployment.

Better build-time memory usage

The Next.js team, together with the Webpack team, has worked on improving the build performance and resource utilization of Next.js and Webpack. This collaboration has resulted in up to 16 times better memory usage with no degradation in performance. The improvement ensures that memory gets released much more quickly and no processes crash under stress.

Prefetch performance improvements

Next.js supports prefetching pages for faster navigation. Earlier, users were required to inject a ‘script’ tag into the document ‘body’, which caused an overhead while opening pages. In Next.js 8, the ‘prefetch’ attribute uses link rel="preload" instead of a ‘script’ tag, and prefetching starts after onload to allow the browser to manage resources. In addition to removing the overhead, this version also disables prefetch on slower network connections by detecting 2G internet and navigator.connection.saveData mode.

Security improvements

In this version, a new ‘crossOrigin’ config option is introduced to ensure that all ‘script’ tags have the ‘cross-origin’ attribute set.
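The option lives in next.config.js. A minimal sketch, assuming the default project layout (the value 'anonymous' is one illustrative choice for the attribute):

```javascript
// next.config.js - minimal sketch of the crossOrigin option.
// With this set, the <script> tags Next.js emits carry
// crossorigin="anonymous", so no per-page _document.js override is needed.
module.exports = {
  crossOrigin: 'anonymous'
}
```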
With this new config option, you no longer need ‘pages/_document.js’ to set up cross-origin in your application.

Another security improvement is the removal of inline JavaScript. In previous versions, users were required to include script-src 'unsafe-inline' in their policy to enable Content Security Policy, because Next.js was creating an inline ‘script’ tag to pass data. In this version, the inline script tag is changed to a JSON tag for safe transfer to the client, which means Next.js no longer includes any inline scripts.

To read about other updates introduced in Next.js 8, check out its official announcement.


Apple Pay will soon support NFC tags to trigger payments

Vincy Davis
14 May 2019
3 min read
At the beginning of this month, Apple’s Vice President of Apple Pay, Jennifer Bailey, announced a new NFC feature for Apple Pay. Apple Pay will now support NFC stickers/tags that trigger it for payment without needing an app installed. The announcement was made during the keynote address at the TRANSACT Conference in Las Vegas, which focused on global payment technology.

Special NFC tags will trigger Apple Pay purchases when tapped with a new iPhone. This means all you need to do is tap the NFC tag and confirm the purchase through Apple Pay (via Face ID or Touch ID), and the payment is done. It requires no separate app and is handled by Apple Pay along with the Wallet app. As per 9to5Mac, Apple is partnering with Bird scooters, Bonobos clothing store, and PayByPhone parking meters in the initial round.

Also, users will soon be able to sign up for loyalty cards within the Wallet app with a single tap, with no third party or setup required. According to NFC World, Dairy Queen, Panera Bread, Yogurtland, Jimmy John's, Dave & Busters, and Caribou Coffee are all planning to launch services later this year that will use NFC tags allowing customers to sign up for loyalty cards.

https://twitter.com/SteveMoser/status/1127949077432426496

This could be another step towards Apple’s goal of replacing the wallet. The feature will make instant, on-the-go purchases a lot faster and easier. A user on Reddit commented, “From a user's point of view, this seems great. No need to wait for congested LTE to download an app in order to pay for a scooter or parking.” Another user compared Apple Pay with QR codes, stating, “QR code requires at least one more step which is using the camera. Hopefully, Apple Pay will be just a single tap and confirm, which would be invoked automatically whenever the phone is near a point of sale. And since the NFC tags will have a predetermined, set payment amount associated with them, even biometrics shouldn’t be necessary.”

https://twitter.com/lgeffen/status/1128083948410744832

More details on this feature can be expected at the Apple Worldwide Developers Conference (WWDC19) coming up in June.


TensorFlow.js: Architecture and applications

Bhagyashree R
05 Feb 2019
4 min read
In a paper published last month, Google developers explained the design, API, and implementation of TensorFlow.js, the JavaScript implementation of TensorFlow. TensorFlow.js was first introduced at the TensorFlow Dev Summit 2018. It is the successor of deeplearn.js, which was released in August 2017 and is now known as TensorFlow.js Core. Google's motivation behind creating TensorFlow.js was to put machine learning into the hands of web developers, who generally do not have much experience with it. It also aims to let experienced ML users and teaching enthusiasts easily migrate their work to JavaScript.

The TensorFlow.js architecture
TensorFlow.js, as the name suggests, is based on TensorFlow, with a few exceptions specific to the JS environment. The library comes with the following two sets of APIs:
The Ops API facilitates lower-level linear algebra operations such as matrix multiplication, tensor addition, and so on.
The Layers API, similar to the Keras API, provides developers with high-level model building blocks and best practices, with an emphasis on neural networks.

Source: TensorFlow.js

TensorFlow.js backends
To support device-specific kernel implementations, TensorFlow.js has a concept of backends. It currently supports three: a plain-JavaScript CPU backend, WebGL, and Node.js. Two rising web standards, WebAssembly and WebGPU, will also be supported as backends in the future. To utilize the GPU for fast, parallelized computations, TensorFlow.js relies on WebGL, a cross-platform web standard that provides low-level 3D graphics APIs. Among the three backends, the WebGL backend has the highest complexity. With the introduction of Node.js and event-driven programming, the use of JS in server-side applications has grown over time. Server-side JS has full access to the filesystem, the native operating system kernel, and existing C and C++ libraries. To support these server-side use cases of machine learning in JavaScript, TensorFlow.js comes with a Node.js backend that binds to the official TensorFlow C API using the N-API. As a fallback, TensorFlow.js provides a slower CPU implementation in plain JS. This fallback can run in any execution environment and is used automatically when the environment has no access to WebGL or the TensorFlow binary.

Current applications of TensorFlow.js
Since its launch, TensorFlow.js has seen applications in various domains. Here are some of the interesting examples the paper lists:

Gestural interfaces: TensorFlow.js is being used in applications that take gestural input via webcam. Developers are using the library to build applications that translate sign language to speech, enable individuals with limited motor ability to control a web browser with their face, and perform real-time facial recognition and pose detection.

Research dissemination: The library has enabled ML researchers to make their algorithms more accessible to others. For instance, the Magenta.js library, developed by the Magenta team, provides in-browser access to generative music models. Porting their work to the web with TensorFlow.js has increased its visibility with their target audience, namely musicians.

Desktop and production applications: In addition to web development, JavaScript has been used to develop desktop and production applications. Node Clinic, an open-source performance profiling tool, recently integrated a TensorFlow.js model to separate CPU usage spikes caused by the user from those caused by Node.js internals. Another example is Mood.gg Desktop, a desktop application powered by Electron, a popular JavaScript framework for writing cross-platform desktop apps. With the help of TensorFlow.js, Mood.gg detects which character the user is playing in the game Overwatch by looking at the user's screen, and then plays a custom soundtrack from a music streaming site that matches the playing style of the in-game character.

Read the paper, TensorFlow.js: Machine Learning for the Web and Beyond, for more details.

TensorFlow.js 0.11.1 releases!
Emoji Scavenger Hunt showcases TensorFlow.js
16 JavaScript frameworks developers should learn in 2019
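The Ops-vs-Layers split described in the architecture section can be illustrated without TensorFlow.js itself. The snippet below is a minimal plain-JavaScript sketch (names like `matMul`, `addBias`, and `Dense` are illustrative, not the real tf.js API) showing how a Keras-style layer is just a convenience wrapper over low-level linear-algebra ops.

```javascript
// Ops level: raw linear-algebra primitives, the kind of thing the Ops API exposes.
function matMul(a, b) {
  // a is m x n, b is n x p; result is m x p
  return a.map(row =>
    b[0].map((_, j) => row.reduce((sum, v, k) => sum + v * b[k][j], 0))
  );
}

function addBias(m, bias) {
  return m.map(row => row.map((v, j) => v + bias[j]));
}

// Layers level: a Keras-style dense layer built on top of the ops,
// bundling weights, bias, and activation into one building block.
class Dense {
  constructor(weights, bias, activation = x => x) {
    this.weights = weights;
    this.bias = bias;
    this.activation = activation;
  }
  apply(input) {
    return addBias(matMul(input, this.weights), this.bias)
      .map(row => row.map(this.activation));
  }
}

const relu = x => Math.max(0, x);
const layer = new Dense([[1, -1], [2, 0]], [0.5, 0.5], relu);
console.log(layer.apply([[1, 2]])); // logs [[5.5, 0]]
```

In the real library, `tf.matMul` and `tf.add` play the role of the ops here, while `tf.layers.dense` packages weights, bias, and activation the way `Dense` does above, with the chosen backend (CPU, WebGL, or Node.js) supplying the actual kernels.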


Google's home security system Nest Secure had a hidden microphone; Google says it was an "error"

Melisha Dsouza
21 Feb 2019
2 min read
Earlier this month, Google upgraded its home security and alarm system, Nest Secure, to work with Google Assistant, meaning Nest Secure customers can now perform tasks like asking Google about the weather. The device came with a microphone for this purpose, without it being mentioned in the device's published specifications. On Tuesday, a Google spokesperson told Business Insider that the omission was an "error" on their part: "The on-device microphone was never intended to be a secret and should have been listed in the tech specs." Further, the Nest team added that the microphone has "never been on" and is activated only when users specifically enable the option. As for why the microphone was installed in the devices at all, the team said it was there to support future features "such as the ability to detect broken glass."

Before sending its official statement to Business Insider, the Nest team had replied to a similar concern from a user on Twitter in early February. https://twitter.com/treaseye/status/1092507172255289344

Scott Galloway, professor of marketing at the New York University Stern School of Business, expressed strong sentiments about the news on Twitter. https://twitter.com/profgalloway/status/1098228685155508224

Users have even accused Google of "pretending the mistake happened" and slammed it over the error. https://twitter.com/tshisler/status/1098231070275686400 https://twitter.com/JoshConstine/status/1098086028353720320

Apart from Google, there have been multiple cases in the past of Amazon Alexa and Google Home devices listening to people's conversations, invading their privacy. A family in Portland, for example, discovered that its Alexa-powered Echo device had recorded a private conversation and sent it to a random person in their contacts list.

Google's so-called "error" could lead to a drop in the number of customers buying its home security system, as well as in the trust users place in Google's products. It is high time Google started thinking along the lines of the security standards and integrity its products are expected to maintain.

Amazon's Ring gave access to its employees to watch live footage of the customers, The Intercept reports
Email and names of Amazon customers exposed due to 'technical error'; number of affected users unknown
Google Home and Amazon Alexa can no longer invade your privacy; thanks to Project Alias!

Valve reveals new Index VR kit with detailed specs, costing up to $999

Fatema Patrawala
02 May 2019
4 min read
Valve introduced its new VR headset kit, the Valve Index, only a month ago, saying preorders would begin on May 1st and units would ship in June. Today, Valve is fully detailing the Index headset for the first time and revealing exactly how much it will cost: $999. The price is relatively high by today's VR headset standards; in comparison, Facebook announced that the Oculus Quest and Oculus Rift S will ship on May 21st for $399. But Valve says it will let you buy parts piecemeal if you need, which is a good deal if you do not wish to buy the whole kit. If you've already got a Vive or Vive Pro, or don't need the latest Knuckles controllers, you won't necessarily need to spend that whole $999 to get started. Get the best look yet at the Index headset on the Valve Index website.

Like the HTC Vive, which was co-designed with Valve, the Index will still be a tethered experience, with a 5-meter cable that plugs into a gaming PC. It also uses the company's laser-firing Lighthouse base stations to figure out where the headset is at any given time. That's how it lets you walk around a room's worth of space in VR, up to a huge 10 x 10 meter area. Valve is not using cameras for inside-out tracking; the company says the twin stereo RGB cameras here are designed for passthrough (letting you see the real world through the headset) and for the computer vision community. Instead, Valve says the Index's focus is on delivering the highest-fidelity VR experience possible, meaning improved lenses, screens, and audio. It includes a pair of 1440 x 1600 RGB LCDs rather than the higher-resolution OLED screens much of the competition is already using. But Valve says its screens run faster (120Hz, with an experimental 144Hz mode) and are better at combating the "screen door effect" and motion blur when you move your head, persistence issues that first-gen VR headsets struggled with.

The Valve Index also has an IPD slider to adjust for the distance between your eyes, and lenses that Valve says offer a 20-degree-larger field of view than the HTC Vive "for typical users." Most interesting are the built-in "headphones" shown on the website, which aren't actually headphones but speakers. They are designed not to touch your ears, instead firing their sound toward your head, similar to how Microsoft's HoloLens visor produces audio. This means that while people around you could theoretically hear what you're doing, there will be less fiddling with the mechanism to get the audio aligned with your ears. Valve has also provided a 3.5mm headphone jack if you want to plug in your own headphones.

Another interesting part of the Valve Index is the controllers, which can be purchased separately for $279. The Valve Index Controllers, formerly known as Knuckles, might be the most intuitive way yet to get your hands into VR. While a strap holds the controller to your hand, 87 sensors track the position of your hands and fingers, and even how hard you're pressing down. Theoretically, you could easily reach, grab, and throw virtual objects with such a setup, something that wasn't really possible with the HTC Vive or Oculus Touch controllers. Here's one gameplay example that Valve is showing off:

Source: Valve website

Another small improvement concerns the company's Lighthouse base stations. Since they now use only a single laser and no IR blinker, Valve says they play nicer with other IR devices, which means you can turn your TV on and off without needing to power the base stations down first. According to reports by Polygon, which got an early hands-on with the Valve Index, the Knuckles feel great, the optics are sharp, and it may be the most comfortable VR headset yet to wear over a pair of glasses. Polygon also put the $999 price point in context: during Valve's demonstration, a spokesperson said that the Index is the sort of thing likely to appeal to a virtual reality enthusiast who (a) must have the latest thing and (b) enjoys sufficient disposable income to satisfy that desire. It's an interesting contrast with Facebook's strategy for the Rift, which is pushing hard for the price tipping point at which VR suddenly becomes a mass-market thing, like smartphones did a decade ago. Get to know the pricing details of the Valve Index kit on its official page.

Top 7 tools for virtual reality game developers
Game developers say Virtual Reality is here to stay
Facebook releases DeepFocus, an AI-powered rendering system to make virtual reality more real


You can now use fingerprint or screen lock instead of passwords when visiting certain Google services thanks to FIDO2 based authentication

Sugandha Lahoti
13 Aug 2019
2 min read
Google has announced FIDO2-based local user verification for Google Accounts, for a simpler authentication experience when viewing saved passwords for a website. In essence, you can now use your fingerprint or screen lock instead of a password when visiting certain Google services. This password-free authentication leverages the FIDO2 standards, FIDO CTAP and WebAuthn, which are designed to "provide simpler and more secure authentication experiences. They are a result of years of collaboration between Google and many other organizations in the FIDO Alliance and the W3C," according to a blog post from the company.

The new authentication process is designed to speed up logging into Google accounts, and to be more secure, by replacing typed passwords with direct biometric authentication. It works like this: if you tap on any one of your saved passwords on passwords.google.com, Google will prompt you to "Verify that it's you," at which point you can authenticate using your fingerprint or whatever method you usually use to unlock your phone (such as a PIN or a touch pattern). Google has not yet made clear which Google services will support the biometric method; the blog post cited Google's online Password Manager as the example.

Source: Google

Google is also being cautious about data privacy, noting, "Your fingerprint is never sent to Google's servers - it is securely stored on your device, and only a cryptographic proof that you've correctly scanned it is sent to Google's servers. This is a fundamental part of the FIDO2 design." The sign-in feature is currently available on all Pixel devices and will be made available to all Android phones running 7.0 Nougat or later "over the next few days."
Google Titan Security key with secure FIDO two factor authentication is now available for purchase
Google to provide a free replacement key for its compromised Bluetooth Low Energy (BLE) Titan Security Keys
Cloud Next 2019 Tokyo: Google announces new security capabilities for enterprise users


React DevTools 4.0 releases with support for Hooks, experimental Suspense API, and more!

Bhagyashree R
16 Aug 2019
3 min read
Yesterday, the React team announced the release of React DevTools 4.0 for Chrome, Firefox, and Edge. In addition to better performance and a smoother navigation experience, this release fully supports React Hooks and provides a way to test the experimental Suspense API.

Key updates in React DevTools 4.0

Better performance by reducing the "bridge traffic"
The React DevTools extension is made up of two parts: frontend and backend. The frontend portion includes the components tree, the Profiler, and everything else that is visible to you. The backend portion is the invisible one; it is in charge of notifying the frontend by sending messages through a "bridge". In previous versions of React DevTools, the traffic caused by this notification process was one of the biggest performance bottlenecks. Starting with React DevTools 4.0, the team has reduced this bridge traffic by minimizing the number of messages the backend sends to render the Components tree. The frontend can request more information whenever required.

Automatically logs React component stack warnings
React DevTools 4.0 now provides an option to automatically append component stack information to console warnings and errors during development. This enables developers to identify where exactly in the component tree a failure happened. To disable this feature, navigate to the General settings panel and uncheck "Append component stacks to warnings and errors."

Source: React

Components tree updates
Improved Hooks support: Hooks allow you to use state and other React features without writing a class. In React DevTools 4.0, hooks have the same level of support as props and state.
Component filters: Navigating through large component trees can often be tiresome. Now, you can quickly and easily find the component you are looking for by applying component filters.
"Rendered by" list and an owners tree: React DevTools 4.0 has a new "rendered by" list in the right-hand pane that helps you quickly step through the list of owners. There is also an owners tree, the inverse of the "rendered by" list, which lists everything rendered by a particular component.
Suspense toggle: The experimental Suspense API allows you to "suspend" the rendering of a component until a condition is met. In <Suspense> components you can specify the loading states shown while components below them are waiting to be rendered. This DevTools release comes with a toggle that lets you test these loading states.

Source: React

Profiler changes
Import and export profiler data: Profiler data can now be exported and shared among developers for better collaboration.

Source: React

Reload and profile: The React profiler collects performance information each time the application renders, helping you identify and fix performance bottlenecks in your applications. Previously, DevTools only allowed profiling a "profiling-capable version of React," so there was no way to profile the initial mount of an application. This is now supported with a "reload and profile" action.
Component renders list: The profiler in React DevTools 4.0 displays a list of each time a selected component rendered during a profiling session. You can use this list to quickly jump between commits when analyzing a component's performance.

You can check out the release notes of React DevTools 4.0 to know what other features have landed in this release.

React 16.9 releases with an asynchronous testing utility, programmatic Profiler, and more
React Native 0.60 releases with accessibility improvements, AndroidX support, and more
React Native VS Xamarin: Which is the better cross-platform mobile development framework?
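The reason Hooks needed first-class DevTools support is that hook state lives outside the component function, in a list keyed by call order. The following toy model in plain JavaScript illustrates that mechanism; it is not React's real implementation, and `useState` and `render` here are simplified stand-ins.

```javascript
// Toy model: hook state lives in an external array, and each useState call
// claims the next slot, which is why hooks must be called in the same order
// on every render.
const hookStates = [];
let hookIndex = 0;

function useState(initial) {
  const i = hookIndex++;
  if (hookStates[i] === undefined) hookStates[i] = initial;
  const setState = value => { hookStates[i] = value; };
  return [hookStates[i], setState];
}

// "Rendering" resets the hook cursor before calling the component,
// much like React does before each render pass.
function render(component) {
  hookIndex = 0;
  return component();
}

// A function component using state without a class.
function Counter() {
  const [count, setCount] = useState(0);
  return { text: `count: ${count}`, increment: () => setCount(count + 1) };
}

let ui = render(Counter);
console.log(ui.text); // "count: 0"
ui.increment();
ui = render(Counter);
console.log(ui.text); // "count: 1"
```

DevTools 4.0 inspects exactly this kind of per-component hook list, which is how it can show hook values alongside props and state in the components tree.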

Qt for Python 5.12 released with PySide2, Qt GUI and more

Amrata Joshi
24 Dec 2018
4 min read
Last week, Qt introduced Qt for Python 5.12, an official set of Python bindings for Qt that simplifies the creation of innovative and immersive user interfaces for Python applications. With Qt for Python 5.12, developers can quickly visualize the massive amounts of data tied to their Python projects. https://twitter.com/qtproject/status/1076003585979232256

Qt for Python 5.12 comes with a cross-platform environment for all development needs. Qt's user interface development framework features APIs and expansive graphics libraries, and Qt for Python 5.12 provides developers with a user-friendly platform. It is fully supported by the Qt Professional Services team of development experts and practitioners, as well as Qt's global community. Lars Knoll, CTO of Qt, said, "Considering the huge data sets that Python developers work with on a daily basis, Qt's graphical capabilities makes it a perfect fit for the creation of immersive Python user interfaces. With Qt for Python 5.12, our customers can build those user interfaces faster and more easily than ever before – with the knowledge that they are backed by a global team of Qt and user interface experts."

Features of Qt for Python 5.12

PySide2: Qt comes with a C++ framework, combined with the PySide2 Python module, which offers a comprehensive set of bindings between Python and Qt.
Qt GUI creation: Qt Graphical User Interface (GUI) creation consists of the following functional modules:
Qt Widgets: The Qt Widgets module comes with a set of user interface elements for creating classic desktop-style user interfaces.
Qt Quick: The Qt Quick module, a standard library for writing QML applications, contains Quick Controls for creating fluid user interfaces.
Qt QML: The Qt QML module features a framework for developing applications and libraries with the QML language, a declarative language that allows user interfaces to be described in terms of their visual components.
Environment familiarity: Qt for Python 5.12 offers a familiar development environment for Python developers.
PyPI: The Python Package Index (PyPI) makes installing Qt for Python 5.12 easy.
VFX Reference Platform integration: Qt and Qt for Python 5.12 are integral parts of the VFX Reference Platform, a set of tool and library versions used for building software for the VFX industry.
Qt 3D Animation: The Qt 3D Animation module features a set of prebuilt elements to help developers get started with Qt 3D.
Qt Sql: It provides a driver layer, an SQL API layer, and a user interface layer for SQL databases.
Qt TextToSpeech: It provides an API for accessing text-to-speech engines.

Qt for Python 5.12 is available under commercial licensing, as part of the products Qt for Application Development and Qt for Device Creation, and as open source under the LGPLv3 license.

Easy and quick development
Development with Qt for Python 5.12 is fun, fast, and flexible. Developers can easily work on their applications using Qt for Python 5.12, power their UI development with ready-made widgets, controls, beautiful charts, and data visualizations, and create stunning 2D/3D graphics for Python projects.

Qt community
Developers can exchange ideas, learn, share, and connect with the Qt community.

Global Qt Services
Global Qt Services provides tailored support at every stage of the product development lifecycle.

What's next for Qt for Python
The Qt team may simplify the deployment of PySide2 applications, provide smoother interaction with other Python modules, and support additional platforms such as embedded and mobile. Users are excited about the project and are eagerly awaiting the stable release. Qt for Python will be helpful for developers, as it makes developing desktop apps easier, but some users are sticking with PyQt5, since the stable release of Qt for Python hasn't been rolled out yet. The switch from PyQt to PySide might be difficult for many.

To learn more about Qt for Python 5.12, check out Qt's official website.

Getting started with Qt Widgets in Android
Qt Design Studio 1.0 released with Qt photoshop bridge, timeline based animations and Qt live preview
Qt team releases Qt Creator 4.8.0 and Qt 5.12 LTS


Windows Server 2019 comes with security, storage and other changes

Prasad Ramesh
21 Dec 2018
5 min read
Today, Microsoft unveiled new features of Windows Server 2019. The new features fall under four themes: hybrid, security, application platform, and Hyper-Converged Infrastructure (HCI).

General changes
Windows Server 2019, being a Long-Term Servicing Channel (LTSC) release, includes the Desktop Experience. During setup, there are two options to choose from: a Server Core installation or a Server with Desktop Experience installation. A new feature called System Insights brings local predictive analytics capabilities to Windows Server 2019. Powered by machine learning, it aims to help users reduce the operational expenses associated with managing issues in Windows Server deployments.

Hybrid cloud in Windows Server 2019
A feature called Server Core App Compatibility feature on demand (FOD) greatly improves app compatibility in the Server Core installation option. It does so by including a subset of binaries and components from Windows Server with Desktop Experience, without adding the Desktop Experience graphical environment itself. The purpose is to increase the functionality of Windows Server while keeping a small footprint. This feature is optional and is available as a separate ISO that can be added to a Server Core installation.

New measures for security
The security changes cover a new threat-protection offering as well as changes to virtual machines, networking, and the web.

Windows Defender Advanced Threat Protection (ATP)
There is now a Windows Defender offering called Advanced Threat Protection (ATP). ATP has deep platform sensors and response actions to expose memory- and kernel-level attacks. It can respond by suppressing malicious files and terminating malicious processes. There is also a new set of host-intrusion-prevention capabilities called Windows Defender ATP Exploit Guard. The components of ATP Exploit Guard are designed to lock down and protect a machine against a wide variety of attacks and to block behaviors common in malware attacks.

Software Defined Networking (SDN)
SDN delivers many security features that increase customer confidence in running workloads, whether on-premises or as a cloud service provider. These enhancements are integrated into the comprehensive SDN platform first introduced in Windows Server 2016.

Improvements to shielded virtual machines
Users can now run shielded virtual machines on machines that are intermittently connected to the Host Guardian Service, leveraging the fallback HGS and offline mode features. Troubleshooting of shielded virtual machines is improved by enabling support for VMConnect Enhanced Session Mode and PowerShell Direct. Windows Server 2019 now supports Ubuntu, Red Hat Enterprise Linux, and SUSE Linux Enterprise Server inside shielded virtual machines.

Changes for a faster and safer web
Connections are coalesced to deliver uninterrupted and encrypted browsing. HTTP/2's server-side cipher suite negotiation is upgraded for automatic connection-failure mitigation and ease of deployment.

Storage
Three storage changes are made in Windows Server 2019.

Storage Migration Service
This new technology simplifies migrating servers to a newer Windows Server version. It has a graphical tool that inventories data on servers and transfers the data and configuration to newer servers. Users can optionally move the identities of the old servers to the new ones so that apps and users don't have to make changes.

Storage Spaces Direct
New features in Storage Spaces Direct include:
Deduplication and compression capabilities for ReFS volumes
Native support for persistent memory
Nested resiliency for two-node hyper-converged infrastructure at the edge
Two-server clusters that use a USB flash drive as a witness
Support for Windows Admin Center
Display of performance history
Scale up to 4 petabytes per cluster
Mirror-accelerated parity that is two times faster
Drive-latency outlier detection
Increased fault tolerance by manually delimiting the allocation of volumes

Storage Replica
Storage Replica is now also available in the Windows Server 2019 Standard edition. A new test failover feature allows mounting of destination storage to validate replication or backup data. Performance improvements have been made, and Windows Admin Center support has been added.

Failover clustering
New features in failover clustering include:
Cluster sets and Azure-aware clusters
Cross-domain cluster migration
USB witness
Cluster infrastructure improvements
Cluster Aware Updating support for Storage Spaces Direct
File share witness enhancements
Cluster hardening
Failover Cluster no longer using NTLM authentication

Application platform changes in Windows Server 2019
Users can now run Windows- and Linux-based containers on the same container host, using the same docker daemon. Changes are continually being made to improve support for Kubernetes, and a number of improvements have been made to containers, such as changes to identity, compatibility, reduced size, and higher performance. Virtual network encryption now allows encryption of virtual network traffic between virtual machines that communicate within subnets marked as Encryption Enabled. There are also improvements to network performance for virtual workloads, the time service, SDN gateways, a new deployment UI, and persistent memory support for Hyper-V VMs. For more details, visit the Microsoft website.

OpenSSH, now a part of Windows Server 2019
Microsoft announces Windows DNS Server Heap Overflow Vulnerability, users dissatisfied with patch details
Microsoft fixes 62 security flaws on Patch Tuesday and re-releases Windows 10 version 1809 and Windows Server 2019