
Tech News

PostgreSQL wins ‘DBMS of the year’ 2018 beating MongoDB and Redis in DB-Engines Ranking

Amrata Joshi
09 Jan 2019
4 min read
Last week, DB-Engines announced PostgreSQL as the Database Management System (DBMS) of the year 2018, as it gained more popularity in the DB-Engines Ranking over the past year than any of the other 343 monitored systems. Jonathan S. Katz, PostgreSQL contributor, said, "The PostgreSQL community cannot succeed without the support of our users and our contributors who work tirelessly to build a better database system. We're thrilled by the recognition and will continue to build a database that is both a pleasure to work with and remains free and open source."

PostgreSQL, which turns 30 this year, has won the DBMS title for the second time in a row. It has established itself as the preferred data store amongst developers and has been appreciated for its stability and feature set. Various systems in the DBMS market use PostgreSQL as their base technology, which in itself shows how well established PostgreSQL is. Simon Riggs, major PostgreSQL contributor, said, "For the second year in a row, the PostgreSQL team thanks our users for making PostgreSQL the DBMS of the Year, as identified by DB-Engines. PostgreSQL's advanced features cater to a broad range of use cases all within the same DBMS. Rather than going for edge case solutions, developers are increasingly realizing the true potential of PostgreSQL and are relying on the absolute reliability of our hyperconverged database to simplify their production deployments."

How the DB-Engines Ranking scores are calculated

To determine the DBMS of the year, the team at DB-Engines subtracted the popularity scores of January 2018 from the latest scores of January 2019. The team used the difference of these numbers instead of a percentage because a percentage would favor systems with tiny popularity at the beginning of the year. The popularity of a system is calculated using parameters such as the number of mentions of the system on websites and the number of mentions in the results of search engine queries. The team at DB-Engines uses Google, Bing, and Yandex for this measurement. In order to count only relevant results, the team searches for <system name> together with the term database, e.g. "Oracle" and "database". The next measure is general interest in the system, for which the team uses the frequency of searches in Google Trends. The number of related questions and the number of interested users on well-known IT-related Q&A sites such as Stack Overflow and DBA Stack Exchange are also checked in this process. For calculating the ranking, the team also uses the number of offers on the leading job search engines Indeed and Simply Hired. The number of profiles in professional networks such as LinkedIn and Upwork in which the system is mentioned is also taken into consideration, as is the number of tweets in which the system is mentioned. The calculated result is a list of DBMSs sorted by how much they managed to increase their popularity in 2018.

1st runner-up: MongoDB

For 2018, MongoDB is the first runner-up; it previously won DBMS of the year in 2013 and 2014. Its growth in popularity has accelerated ever since, and it is the most popular NoSQL system. MongoDB keeps adding functionality that was previously outside the NoSQL scope. Last year, MongoDB also added ACID support, which convinced a lot of developers to rely on it for critical data. With improved support for analytics workloads, MongoDB is a great choice for a larger range of applications.

2nd runner-up: Redis

Redis, the most popular key-value store, took third place for DBMS of the year 2018. It was also in the top three DBMS of the year for 2014. It is best known as a high-performance and feature-rich key-value store. Redis provides a loadable modules system, which means third parties can extend its functionality. These modules offer a graph database, full-text search, time-series features, JSON data type support, and much more.
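The scoring approach described above is a plain subtraction of year-over-year popularity scores. A minimal Python sketch of the idea, using made-up numbers rather than real DB-Engines scores:

```python
# Hypothetical popularity scores (illustrative only, not real DB-Engines data).
scores_jan_2018 = {"PostgreSQL": 391.0, "MongoDB": 330.0, "Redis": 130.0}
scores_jan_2019 = {"PostgreSQL": 466.0, "MongoDB": 387.0, "Redis": 149.0}

# DBMS of the year = largest absolute gain. An absolute difference is used
# instead of a percentage so that systems with a tiny base score at the
# start of the year don't dominate the ranking.
gains = {name: scores_jan_2019[name] - scores_jan_2018[name]
         for name in scores_jan_2019}
ranking = sorted(gains, key=gains.get, reverse=True)
print(ranking)  # ['PostgreSQL', 'MongoDB', 'Redis']
```

With these placeholder numbers, PostgreSQL's 75-point gain beats MongoDB's 57 and Redis's 19, which mirrors the published 2018 order.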
PipelineDB 1.0.0, the high performance time-series aggregation for PostgreSQL, released!
Devart releases standard edition of dbForge Studio for PostgreSQL
MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code


Limited Availability of DigitalOcean Kubernetes announced!

Melisha Dsouza
03 Oct 2018
3 min read
On Monday, DigitalOcean announced that DigitalOcean Kubernetes, which was available in Early Access, is now accessible in Limited Availability. DigitalOcean Kubernetes simplifies the container deployment process that accompanies plain Kubernetes and offers Kubernetes container hosting services. Incorporating DigitalOcean's trademark simplicity and ease of use, it aims to reduce the headache involved in setting up, managing, and securing Kubernetes clusters. DigitalOcean, incidentally, is also the company behind Hacktoberfest, which runs all of October in partnership with GitHub to promote open source contribution.

The Early Access release was well received by users, who commented on the simplicity of configuring and provisioning a cluster. They appreciated that deploying and running containerized services consumed hardly any time. Users also raised issues and feedback that were used to increase reliability and resolve a number of bugs, improving the user experience in the Limited Availability of DigitalOcean Kubernetes. The team also notes that during Early Access they had a limited set of free hardware resources for users to deploy to, which restricted the total number of users they could provide access to. In the Limited Availability phase, the team hopes to open up access to anyone who requests it. That being said, Limited Availability will be a paid product.

Why should users consider DigitalOcean Kubernetes?

- Each customer has their own dedicated managed cluster, providing security and isolation for their containerized applications with access to the full Kubernetes API.
- DigitalOcean products provide storage for any amount of data.
- Cloud Firewalls make it easy to manage network traffic in and out of the Kubernetes cluster.
- DigitalOcean provides cluster security scanning capabilities to alert users of flaws and vulnerabilities.
- In typical Kubernetes environments, metrics, logs, and events can be lost if nodes are spun down. To help developers learn from the performance of past environments, DigitalOcean stores this information separately from the node, indefinitely.

To know more about these features, head over to their official blog page.

Some benefits for users of Limited Availability: users will be able to provision Droplet workers in many more regions with full support. To test out their containers in an orchestrated environment, they can start with a single-node cluster using a $5/mo Droplet. As they scale their applications, users can add worker pools of various Droplet sizes, attach persistent storage using DigitalOcean Block Storage for $0.10/GB per month, and expose Kubernetes services with a public IP using $10/mo Load Balancers. This is a highly available service designed to protect against application or hardware failures while spreading traffic across available resources.

Looks like users are really excited about this upgrade. (Source: DigitalOcean Blog)

Users who have already signed up for Early Access will receive an email shortly with details about how to get started. To know more about this news, head over to DigitalOcean's blog post.

Kubernetes 1.12 released with general availability of Kubelet TLS Bootstrap, support for Azure VMSS
Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
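The prices quoted in the article make it easy to estimate a monthly bill. A small sketch; the helper function and the example cluster shape are hypothetical, while the rates are the ones quoted above:

```python
# Prices quoted in the announcement (USD per month; storage is per GB/month).
DROPLET_WORKER = 5.00        # smallest $5/mo Droplet worker node
BLOCK_STORAGE_PER_GB = 0.10  # DigitalOcean Block Storage, $0.10/GB per month
LOAD_BALANCER = 10.00        # $10/mo Load Balancer with a public IP

def monthly_cost(workers: int, storage_gb: int, load_balancers: int = 1) -> float:
    """Estimated monthly cost for a small DigitalOcean Kubernetes setup."""
    total = (workers * DROPLET_WORKER
             + storage_gb * BLOCK_STORAGE_PER_GB
             + load_balancers * LOAD_BALANCER)
    return round(total, 2)

# A single-node test cluster with 50 GB of persistent storage and one LB:
print(monthly_cost(workers=1, storage_gb=50))  # 20.0
```

Scaling up to three workers and 100 GB of storage would come to $35/month under the same assumptions.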


Researchers input rabbit-duck illusion to Google Cloud Vision API and conclude it shows orientation-bias

Bhagyashree R
11 Mar 2019
3 min read
Last week, when Janelle Shane, a research scientist in optics, fed the famous rabbit-duck illusion to the Google Cloud Vision API, it gave "rabbit" as a result. However, when the image was rotated to a different angle, the Google Cloud Vision API predicted "duck".

https://twitter.com/JanelleCShane/status/1103420287519866880

Inspired by this, Max Woolf, a data scientist at BuzzFeed, tested further and concluded that the result really does vary based on the orientation of the image:

https://twitter.com/minimaxir/status/1103676561809539072

Google Cloud Vision provides pretrained API models that allow you to derive insights from input images. The API classifies images into thousands of categories, detects individual objects and faces within images, and reads printed words within images. You can also train custom vision models with AutoML Vision Beta.

Woolf used Python to rotate the image and get predictions from the API for each rotation. He built the animations with R, ggplot2, and gganimate, and used ffmpeg to render them. In deep learning, a model is often trained using a strategy in which the input images are rotated, to help the model generalize better. Seeing the results of the experiment, Woolf concluded, "I suppose the dataset for the Vision API didn't do that as much / there may be an orientation bias of ducks/rabbits in the training datasets."

The reaction to this experiment was divided. While many Reddit users felt that there might be an orientation bias in the model, others felt that, since the image is ambiguous, there is no "right answer" and hence no problem with the model. One Redditor said, "I think this shows how poorly many neural networks are at handling ambiguity." Another Redditor commented, "This has nothing to do with a shortcoming of deep learning, failure to generalize, or something not being in the training set. It's an optical illusion drawing meant to be visually ambiguous. Big surprise, it's visually ambiguous to computer vision as well. There's not 'correct' answer, it's both a duck and a rabbit, that's how it was drawn. The fact that the Cloud vision API can see both is actually a strength, not a shortcoming."

Woolf has open-sourced the code used to generate this visualization on his GitHub page, which also includes a CSV of the prediction results at every rotation. In case you are more curious, you can test the Cloud Vision API with the drag-and-drop UI provided by Google.

Google Cloud security launches three new services for better threat detection and protection in enterprises
Generating automated image captions using NLP and computer vision [Tutorial]
Google Cloud Firestore, the serverless, NoSQL document database, is now generally available
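The structure of Woolf's experiment (rotate the image, ask the API for a label, record the result at each angle) can be sketched in a few lines of Python. The classify function below is a hypothetical stand-in that just mimics the reported orientation flip, so the loop runs without Vision API credentials; in the real experiment each call would send the rotated image to the Cloud Vision API:

```python
def classify(angle_degrees: int) -> str:
    # Hypothetical stand-in for a Cloud Vision label request on the
    # rotated image; mimics the reported flip between the two labels.
    return "rabbit" if (angle_degrees % 360) < 180 else "duck"

def rotation_sweep(step: int = 15) -> dict:
    # Sweep through a full circle, recording the top label per angle,
    # as Woolf did with the real API before animating the results.
    return {angle: classify(angle) for angle in range(0, 360, step)}

labels = rotation_sweep()
print(labels[0], labels[180])  # rabbit duck
```

The per-angle dictionary is exactly what Woolf's published CSV of prediction results contains, one row per rotation.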


DARPA’s $2 Billion ‘AI Next’ campaign includes a Next-Generation Nonsurgical Neurotechnology (N3) program

Savia Lobo
11 Sep 2018
3 min read
Last Friday (7th September 2018), DARPA announced a multi-year investment of more than $2 billion in a new program called the 'AI Next' campaign. DARPA director Dr. Steven Walker officially unveiled the large-scale effort during D60, DARPA's 60th Anniversary Symposium held in Maryland. This campaign seeks contextual reasoning in AI systems in order to create deeper trust and collaborative partnerships between humans and machines.

The key areas the AI Next campaign may include are:

- Automating critical DoD (Department of Defense) business processes, such as security clearance vetting in a week or accrediting software systems in one day for operational deployment.
- Improving the robustness and reliability of AI systems, and enhancing the security and resiliency of machine learning and AI technologies.
- Reducing power, data, and performance inefficiencies.
- Pioneering the next generation of AI algorithms and applications, such as 'explainability' and commonsense reasoning.

The Next-Generation Nonsurgical Neurotechnology (N3) program

At the conference, DARPA officials also described the next frontier of neuroscience research: technologies for able-bodied soldiers that give them super abilities. Following this, they introduced the Next-Generation Nonsurgical Neurotechnology (N3) program, which was announced in March. This program aims at funding research on tech that can transmit high-fidelity signals between the brain and some external machine without requiring that the user be cut open for rewiring or implantation. Al Emondi, manager of N3, told IEEE Spectrum that he is currently picking the researchers who will be funded under the program, and an announcement can be expected in early 2019.

The program has two tracks:

- Completely non-invasive: The N3 program aims for new non-invasive tech that can match the high performance currently achieved only with implanted electrodes that are nestled in the brain tissue and therefore have a direct interface with neurons, either recording the electrical signals when the neurons "fire" into action or stimulating them to cause that firing.
- Minutely invasive: DARPA says it doesn't want its new brain tech to require even a tiny incision. Instead, minutely invasive tech might come into the body in the form of an injection, a pill, or even a nasal spray. Emondi imagines "nanotransducers" that can sit inside neurons, converting the electrical signal when a neuron fires into some other type of signal that can be picked up through the skull.

Justin Sanchez, director of DARPA's Biological Technologies Office, said that making brain tech easy to use will open the floodgates. He added, "We can imagine a future of how this tech will be used. But this will let millions of people imagine their own futures."

To know more about the AI Next campaign and the N3 program in detail, visit the DARPA blog.

Skepticism welcomes Germany's DARPA-like cybersecurity agency – The federal agency tasked with creating cutting-edge defense technology
DARPA on the hunt to catch deepfakes with its AI forensic tools underway


Game rivals, Microsoft and Sony, form a surprising cloud gaming and AI partnership

Sugandha Lahoti
17 May 2019
3 min read
Microsoft and Sony have been fierce gaming rivals ever since Microsoft's Xbox challenged the Sony PlayStation 2 in 2001. However, in an unusual announcement yesterday, Microsoft and Sony signed a memorandum of understanding to jointly explore the development of future cloud solutions in Microsoft Azure to support their respective game and content-streaming services.

Sony and Microsoft will also explore collaboration in the areas of semiconductors and AI. For semiconductors, they will jointly develop new intelligent image sensor solutions. In terms of AI, the parties will incorporate Microsoft's AI platform and tools in Sony's consumer products. Microsoft said in a statement that "these efforts will also include building better development platforms for the content creator community," suggesting that both companies will probably partner on future services aimed at creators and the gaming community.

Rivals turned allies

Sony's decision to set aside the rivalry and partner with Microsoft makes sense for two main reasons.

First, cloud streaming is considered the next big thing in gaming. Only three companies, Microsoft, Google, and Amazon, have enough cloud experience to offer viable, modern cloud infrastructure. Although Sony has the technical competence to build its own cloud streaming service, it makes more sense to deploy via Microsoft's Azure than to scale its own distribution systems. Microsoft, for its part, is happy to extend a welcoming hand to a customer as large as Sony. Moreover, neither Sony nor Microsoft is going to commit fully to game streaming, as both already have consoles in development. This is unlike Amazon and Google, who are going full throttle in building game streaming. That Sony chose to go with Microsoft, and that Microsoft is putting real resources into the effort and going so far as to collaborate, shows that both companies understand game streaming is not something they can afford to be without.

Second, this partnership is also likely a direct response to Google's Stadia game streaming service, unveiled at Game Developers Conference 2019. Stadia is a cloud-based game streaming platform that aims to bring together gamers, YouTube broadcasters, and game developers "to create a new experience". Games are streamed from any data center to any device that can connect to the internet: TV, laptop, desktop, tablet, or mobile phone. Gamers can access their games anytime, on virtually any screen, and game developers will be able to use nearly unlimited resources for developing games. Since all the graphics processing happens on off-site hardware, there is little stress on local hardware.

"Sony has always been a leader in both entertainment and technology, and the collaboration we announced today builds on this history of innovation," says Microsoft CEO Satya Nadella. "Our partnership brings the power of Azure and Azure AI to Sony to deliver new gaming and entertainment experiences for customers."

Twitter was filled with funny memes on this alliance and its direct contest with Stadia.

https://twitter.com/MikieDaytona/status/1129076134950445056
https://twitter.com/shaunlabrie/status/1129144724646813696
https://twitter.com/kettleotea/status/1129142682004205569

Going forward, the two companies will share additional information when available. Read the official announcement here.

Google announces Stadia, a cloud-based game streaming service, at GDC 2019
Microsoft announces Project xCloud, a new Xbox game streaming service
Amazon is reportedly building a video game streaming service, says Information


Microsoft Azure now supports NVIDIA GPU Cloud (NGC)

Vijin Boricha
31 Aug 2018
2 min read
Yesterday, Microsoft announced NVIDIA GPU Cloud (NGC) support on its Azure platform. With this, data scientists, researchers, and developers can build, test, and deploy GPU computing projects on Azure. Users can run containers from NGC on Azure, giving them access to on-demand GPU computing that can scale with their requirements. This eliminates the complexity of software integration and testing.

The need for NVIDIA GPU Cloud (NGC)

It is challenging and time-consuming to build and test reliable software stacks to run popular deep learning software such as TensorFlow, Microsoft Cognitive Toolkit, PyTorch, and NVIDIA TensorRT, due to OS-level and constantly updated framework dependencies. Finding, installing, and testing the correct dependencies is quite a hassle, as it has to be done in a multi-tenant environment and across many systems. NGC eliminates these complexities by offering pre-configured containers with GPU-accelerated software.

Users can now access 35 GPU-accelerated containers for deep learning software, high-performance computing applications, high-performance visualization tools, and much more, enabled to run on the following Microsoft Azure instance types with NVIDIA GPUs:

- NCv3 (1, 2 or 4 NVIDIA Tesla V100 GPUs)
- NCv2 (1, 2 or 4 NVIDIA Tesla P100 GPUs)
- ND (1, 2 or 4 NVIDIA Tesla P40 GPUs)

According to NVIDIA, the same NGC containers also work across Azure instance types with different types or quantities of GPUs.

Using NGC containers with Azure is quite easy. Users just have to sign up for a free NGC account, then visit the Microsoft Azure Marketplace to find the pre-configured NVIDIA GPU Cloud Image for Deep Learning and high-performance computing. Once you launch the NVIDIA GPU instance on Azure, you can pull the containers you want from the NGC registry into your running instance. You can find detailed steps for setting up NGC in the Using NGC with Microsoft Azure documentation.

Microsoft Azure's new governance DApp: An enterprise blockchain without mining
NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?
NVIDIA announces pre-orders for the Jetson Xavier Developer Kit, an AI chip for autonomous machines, at $2,499

Libc++ 9 releases with explicit support for WebAssembly System Interface (WASI)

Sugandha Lahoti
14 Oct 2019
2 min read
On Friday, libc++ version 9 was released; libc++ is an implementation of the C++ standard library, targeting C++11, C++14 and above. Libc++ 9 is part of the LLVM Compiler Infrastructure release 9.0.0, which was made available in September. Libc++ 9 adds explicit support for the WebAssembly System Interface (WASI), along with major improvements over the previous release and new feature work. Libc++ has also dropped support for GCC 4.9; it now supports GCC 5.1 and above.

WASI is a system interface for the WebAssembly platform. Currently, it supports sandboxed access to the filesystem via a POSIX-like API, as well as other basic interfaces like argv, environment variables, random numbers, and timers. There are three popular implementations of WASI: wasmtime (Mozilla's WebAssembly runtime), Lucet (Fastly's WebAssembly runtime), and a browser polyfill.

Improvements in libc++ 9

- Minor fixes to std::chrono operators.
- libc++ now correctly handles Objective-C++ ARC qualifiers in std::is_pointer.
- front and back methods added to std::span.
- std::to_chars now adds leading zeros.
- Ensured std::tuple is trivially constructible.
- std::aligned_union now works in C++03.
- Output of nullptr to std::basic_ostream is formatted properly.
- P0608 is now implemented as a sane variant converting constructor.
- std::is_unbounded_array and std::is_bounded_array added to type traits.
- std::atomic now includes many new features and specializations.
- Added std::midpoint and std::lerp math functions and the std::is_constant_evaluated function.
- Erase-like algorithms now return the size type.
- Added a contains method to container types.
- std::swap is now a constant expression.
- std::move and std::forward now both work in C++03 mode.

People on Twitter were quite happy with WASI support in libc++:

https://twitter.com/Stephen_d2005/status/1178489876070535168
https://twitter.com/iwillrunoutofsp/status/1182702301062008832

You can also see the release notes for additional information.
Introducing Weld, a runtime written in Rust and LLVM for cross-library optimizations
LLVM 9 releases with official RISC-V target support, asm goto, Clang 9, and more
LLVM's Clang 9.0 to ship with experimental support for OpenCL C++17, asm goto initial support and more
LLVM's Arm stack protection feature turns ineffective when the stack is re-allocated


Alibaba launches an AI chip company named ‘Ping-Tou-Ge’ to boost China’s semiconductor industry

Savia Lobo
24 Sep 2018
3 min read
Alibaba Group has entered the semiconductor industry by launching a new subsidiary named 'Ping-Tou-Ge' that will develop computer chips specifically designed for artificial intelligence. The company made this announcement last week at its Computing Conference in Hangzhou.

Why the name 'Ping-Tou-Ge'?

"Ping-Tou-Ge" is a Mandarin nickname for the honey badger, an animal native to Africa, Southwest Asia, and the Indian subcontinent. Alibaba chief technology officer Jeff Zhang says, "Many people know that the honey badger is a legendary animal: it's not afraid of anything and has skillful hunting techniques and great intelligence". He further added, "Alibaba's semiconductor company is new; we're just starting out. And so we hope to learn from the spirit [of the honey badger]. A chip is small [like the honey badger], and we hope that such a small thing will produce great power."

Ping-Tou-Ge is one of Alibaba's efforts to improve China's semiconductor industry

The main trigger for the creation of Ping-Tou-Ge was the US ban on Chinese telecom giant ZTE, which made clear how heavily China's semiconductor industry depends on imported chipsets. Alibaba has been steadily increasing its footprint in the chip industry. DAMO Academy, established in 2017, focuses on areas such as machine intelligence and data computing. Alibaba also acquired Chinese chipmaker Hangzhou C-SKY Microsystems in April to enhance its own chip production capacity; C-SKY Microsystems designs domestically developed embedded chipsets.

Zhang Jianfeng, head of Alibaba's DAMO Academy, said in a statement that the Hangzhou-based company will produce its first neural network chip in the second half of next year, with an internally developed technology platform and a synergized ecosystem. Ping-Tou-Ge will combine DAMO's chip business and C-SKY Microsystems. It will operate independently in the development of its embedded chip series CK902 and its neural network chip Ali-NPU. The Ali-NPU chip is designed for AI inferencing in fields such as image processing and machine learning. Some of its expected features:

- Around 40 times more cost-effective than conventional chips.
- 10 times better performance than mainstream CPU- and GPU-architecture AI chips in the current market.
- Power and manufacturing costs cut in half.

Ping-Tou-Ge will also focus on customized AI chips and embedded processors to support Alibaba's growing cloud and Internet of Things (IoT) business. These chips could be used in various industries such as vehicles, home appliances, and manufacturing. To know more about Ping-Tou-Ge in detail, visit the MIT Technology Review blog.

OpenSky is now a part of the Alibaba family
Alibaba Cloud partners with SAP to provide a versatile, one-stop cloud computing environment
Why Alibaba cloud could be the dark horse in the public cloud race


Microsoft releases security updates: a “wormable” threat similar to WannaCry ransomware discovered

Amrata Joshi
16 May 2019
3 min read
Microsoft has released security updates for unsupported but still widely used Windows operating systems like XP and Windows 2003. The company took this step as part of its May 14 Patch Tuesday, due to the discovery of a "wormable" flaw that could pose a major threat, similar to the WannaCry ransomware attacks of 2017. The WannaCry ransomware spread quickly across the world in May 2017 thanks to a vulnerability that was prevalent among systems running Windows XP and older versions of Windows. On Tuesday, Microsoft released 16 updates that target at least 79 security issues in Windows and related software. Let's have a look at two of the vulnerabilities, CVE-2019-0708 and CVE-2019-0863.

CVE-2019-0708, Remote Desktop Services vulnerability

The CVE-2019-0708 vulnerability lies in Remote Desktop Services and affects supported versions of Windows, including Windows 7, Windows Server 2008 R2, and Windows Server 2008. It is also present in computers running Windows XP and Windows 2003. To attack a system, an unauthenticated attacker connects to the target system using the Remote Desktop Protocol (RDP) and then sends specially crafted requests. The security update corrects how Remote Desktop Services handles connection requests. The vulnerability does not affect Microsoft's latest operating systems: Windows 10, Windows 8, Windows 8.1, Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, or Windows Server 2012. The company hasn't observed any evidence of attacks against this security flaw, but the update might head off a serious and imminent threat.

Simon Pope, director of incident response for the Microsoft Security Response Center, said, "This vulnerability is pre-authentication and requires no user interaction. In other words, the vulnerability is 'wormable,' meaning that any future malware that exploits this vulnerability could propagate from vulnerable computer to vulnerable computer in a similar way as the WannaCry malware spread across the globe in 2017. It is important that affected systems are patched as quickly as possible to prevent such a scenario from happening."

CVE-2019-0863, zero-day vulnerability

One of the security updates fixed a zero-day vulnerability (CVE-2019-0863) in the Windows Error Reporting (WER) service. An attacker who successfully exploits this vulnerability can run arbitrary code in kernel mode. The attacker can then install programs; change, view, or delete data; or create new accounts with administrator privileges. An attacker first has to gain unprivileged execution on the victim's system in order to exploit the vulnerability. Microsoft's security update addresses this vulnerability by correcting the way WER handles files. According to Chris Goettl, director of product management for security vendor Ivanti, this vulnerability has already been seen in targeted attacks.

Microsoft Office and Office 365, SharePoint, .NET Framework, and SQL Server are some of the other Microsoft products that received patches. To know more about this news, check out Microsoft's page.

#MSBuild2019: Microsoft launches new products to secure elections and political campaigns
Microsoft Build 2019: Introducing Windows Terminal, application packed with multiple tab opening, improved text and more
Microsoft Build 2019: Introducing WSL 2, the newest architecture for the Windows Subsystem for Linux


Cache is king: Announcing lower pricing for Cloud CDN from Cloud Blog

Matthew Emerick
14 Oct 2020
2 min read
Organizations all over the world rely on Cloud CDN for fast, reliable web and video content delivery. Now, we're making it even easier for you to take advantage of our global network and cache infrastructure by reducing the cost of Cloud CDN for your content delivery going forward.

First, we're reducing the price of cache fill (content fetched from your origin) charges across the board, by up to 80%. You still get the benefit of our global private backbone for cache fill, ensuring continued high performance at a reduced cost. We've also removed cache-to-cache fill charges and cache invalidation charges for all customers going forward.

This price reduction, along with our recent introduction of a new set of flexible caching capabilities, makes it even easier to use Cloud CDN to optimize the performance of your applications. Cloud CDN can now automatically cache web assets, video content or software downloads, control exactly how they should be cached, and directly set response headers to help meet web security best practices.

You can review our updated pricing in our public documentation, and customers egressing over 1PB per month should reach out to our sales team to discuss commitment-based discounts as part of your migration to Google Cloud. To read more about Cloud CDN, or begin using it, start here.
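As a rough illustration of what these changes mean for a bill, here is a small sketch. The old per-GB rate is a hypothetical placeholder, and the 80% reduction is an "up to" maximum, so real savings vary by region and usage:

```python
def monthly_cache_fill_cost(fill_gb: float, old_rate_per_gb: float) -> float:
    # Cache fill is reduced by up to 80%; cache-to-cache fill and
    # cache invalidation charges are removed entirely going forward.
    reduced_fill = fill_gb * old_rate_per_gb * (1 - 0.80)
    cache_to_cache = 0.0   # previously billed, now free
    invalidation = 0.0     # previously billed, now free
    return round(reduced_fill + cache_to_cache + invalidation, 2)

# 1,000 GB of cache fill at a hypothetical old rate of $0.08/GB:
print(monthly_cache_fill_cost(1000, 0.08))  # 16.0, versus 80.0 before the cut
```

The same origin traffic that would have billed $80 in cache fill now bills $16 at the maximum reduction, before any cache-to-cache or invalidation savings.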
Sugandha Lahoti
28 Sep 2018
2 min read

Stack Overflow celebrates its 10th birthday as the most trusted developer community

Ten years ago, on September 15, 2008, Stack Overflow’s public beta went live. Yesterday, Stack Overflow put up a post to commemorate the 10-year anniversary of the site.

Back in 2008, Joel Spolsky got frustrated searching Google for a specific programming question. This inspired him to start a programmer-specific Q&A portal where developers could ask their programming-related questions, combining the idea of a Q&A site with voting and editing. Since then, 9.3 million users have provided 25 million answers to 16 million questions, and it has grown into a trusted online community for developers to learn, share their knowledge, and build their careers.

Per the stats posted on their website, “Every 5.1 seconds, someone takes time out of their day and posts an answer, to help a complete stranger on the internet. And since 2007, those answers have been found 12.3 billion times by developers in need. We estimate that’s saved developers roughly 3.1 billion hours.”

Stack Overflow has kept providing new tools to ease the workload of developers. Most recently, it launched Stack Overflow for Teams, which provides unlimited private questions and answers for a single team hosted on Stack Overflow. It also partnered with IBM to bring learning and development to the Artificial Intelligence community. Earlier this month, it launched an update to the Salary Calculator, a tool that allows both developers and employers to find typical salaries for the software industry. Last month, it expanded its Code of Conduct, which builds on its previous “Being Nice” motto to include more virtues around kindness, collaboration, and mutual respect.

Here’s a fun video where Stack Overflow engineers share their views on their journey with the company, the community, and developing the future, for its 10th anniversary:

https://www.youtube.com/watch?v=QwS1r1mc888

4 surprising things from StackOverflow’s 2018 survey.
StackOverflow just updated its developers' salary calculator; includes 8 new countries in 2018.
StackOverflow Developer Survey 2018: A Quick Overview.

Prasad Ramesh
20 Dec 2018
2 min read

Windows Sandbox, an environment to safely test EXE files is coming to Windows 10 next year

Microsoft will be offering a new tool called Windows Sandbox next year with a Windows 10 update. Revealed this Tuesday, it provides an environment to safely test EXE applications before running them on your computer.

Windows Sandbox features

Windows Sandbox is an isolated desktop environment where users can run untrusted software without any risk of it affecting their computer. Any application you install in Windows Sandbox is contained in the sandbox and cannot affect your computer. All software, along with its files and state, is permanently deleted when Windows Sandbox is closed. You need Windows 10 Pro or Windows 10 Enterprise to use it, and it will be shipped with an update, so no separate download is needed. Every run of Windows Sandbox is new and behaves like a fresh installation of Windows; everything is deleted when you close it. It uses hardware-based virtualization for kernel isolation based on Microsoft’s hypervisor, so a separate kernel isolates the sandbox from the host machine. It also has an integrated kernel scheduler and a virtual GPU.

Source: Microsoft website

Requirements

To use this new feature, which is based on Hyper-V, you’ll need: AMD64 architecture, virtualization capabilities enabled in BIOS, a minimum of 4GB RAM (8GB recommended), 1GB of free disk space (SSD recommended), and a dual-core CPU (4 cores with hyperthreading recommended).

What are people saying?

The general sentiment towards this release is positive.

https://twitter.com/AnonTechOps/status/1075509695778041857

However, a comment on Hacker News suggests that this might not be that useful for its intended purpose: “Ironically, even though the recommended use for this in the opening paragraph is to combat malware, I think that will be the one thing this feature is no good at. Doesn’t even moderately sophisticated malware these days try to detect if it’s in a sandbox environment? A fresh-out-of-the-box Windows install must be a giant red flag for that.”

Meanwhile, if you’re on Windows 7 or Windows 8, you can try Sandboxie. For more technical details under the hood of Sandbox, visit the Microsoft website.

Oracle releases VirtualBox 6.0.0 with improved graphics, user interface and more
Chaos engineering platform Gremlin announces $18 million series B funding and new feature for “full-stack resiliency”
Are containers the end of virtual machines?
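On a machine that meets the requirements above, Windows Sandbox ships as an optional feature. As a hedged sketch (this assumes the optional-feature name Microsoft has documented for Sandbox, and must be run from an elevated PowerShell prompt):

```
# Enable the Windows Sandbox optional feature (run as Administrator);
# a reboot is required before the sandbox can be launched.
Enable-WindowsOptionalFeature -Online -FeatureName "Containers-DisposableClientVM" -All
```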

Sugandha Lahoti
31 Oct 2018
3 min read

Google AdaNet, a TensorFlow-based AutoML framework

Google researchers have come up with a new AutoML framework which can automatically learn high-quality models with minimal expert intervention. Google AdaNet is a fast, flexible, and lightweight TensorFlow-based framework for learning a neural network architecture and learning to ensemble subnetworks to obtain even better models.

How does Google AdaNet work?

As machine learning models increase in number, AdaNet will automatically search over neural architectures and learn to combine the best ones into a high-quality model. AdaNet implements an adaptive algorithm for learning a neural architecture as an ensemble of subnetworks. It can add subnetworks of different depths and widths to create a diverse ensemble, and trade off performance improvement against the number of parameters. This saves ML engineers the time spent selecting optimal neural network architectures.

Source: Google

AdaNet: built on TensorFlow

AdaNet implements the TensorFlow Estimator interface. This interface simplifies machine learning programming by encapsulating training, evaluation, prediction, and export for serving. AdaNet also integrates with open-source tools like TensorFlow Hub modules, TensorFlow Model Analysis, and Google Cloud’s Hyperparameter Tuner. TensorBoard integration helps to monitor subnetwork training, ensemble composition, and performance; TensorBoard is one of the best TensorFlow features for visualizing model metrics during training. When AdaNet is done training, it exports a SavedModel that can be deployed with TensorFlow Serving.

How to extend AdaNet to your own projects

Machine learning engineers and enthusiasts can define their own adanet.subnetwork.Builder using high-level TensorFlow APIs like tf.layers. Users who have already integrated a TensorFlow model in their system can use the adanet.Estimator to boost model performance while obtaining learning guarantees. Users are also invited to use their own custom loss functions via canned or custom tf.contrib.estimator.Heads in order to train regression, classification, and multi-task learning problems. Users can also fully define the search space of candidate subnetworks to explore by extending the adanet.subnetwork.Generator class.

Experiments: NASNet-A versus AdaNet

Google researchers took an open-source implementation of a NASNet-A CIFAR architecture and transformed it into a subnetwork. They were also able to improve upon the CIFAR-10 results after eight AdaNet iterations, and the model achieves this result with fewer parameters.

Performance of a NASNet-A model versus AdaNet learning to combine small NASNet-A subnetworks on CIFAR-10 (Source: Google)

You can check out the GitHub repo and walk through the tutorial notebooks for more details. You can also have a look at the research paper.

Top AutoML libraries for building your ML pipelines
Anatomy of an automated machine learning algorithm (AutoML)
AmoebaNets: Google’s new evolutionary AutoML
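The adaptive grow-and-ensemble loop described above can be illustrated with a small, self-contained sketch. This is pure Python showing the idea — greedily adding whichever candidate "subnetwork" most improves a complexity-penalized loss, and stopping when none helps — not the actual AdaNet API; the candidate functions and penalty value are made up for illustration:

```python
# Toy 1-D regression data: targets follow y = x^2 exactly.
data = [(x / 10.0, (x / 10.0) ** 2) for x in range(-20, 21)]

# Candidate "subnetworks": simple fixed basis functions.
candidates = {
    "constant": lambda x: 1.0,
    "linear": lambda x: x,
    "square": lambda x: x * x,
}

ensemble = []  # list of (name, fn, weight) tuples

def ensemble_pred(x):
    return sum(w * f(x) for _, f, w in ensemble)

def mse(pred, pts):
    return sum((pred(x) - y) ** 2 for x, y in pts) / len(pts)

def best_weight(f, pts):
    # Closed-form least-squares weight for adding w*f(x) on top of the
    # current ensemble's predictions (i.e. fitting the residuals).
    num = sum((y - ensemble_pred(x)) * f(x) for x, y in pts)
    den = sum(f(x) ** 2 for x, _ in pts)
    return num / den if den else 0.0

PENALTY = 1e-3  # discourages adding subnetworks that barely help

for _ in range(3):  # AdaNet-style iterations
    current = mse(ensemble_pred, data)
    best = None
    for name, f in candidates.items():
        w = best_weight(f, data)
        trial = ensemble + [(name, f, w)]
        loss = mse(lambda x: sum(wi * fi(x) for _, fi, wi in trial), data)
        loss += PENALTY  # charge for the extra subnetwork
        if loss < current and (best is None or loss < best[0]):
            best = (loss, name, f, w)
    if best is None:
        break  # no candidate improves the penalized objective: stop growing
    ensemble.append(best[1:])

print([(name, round(w, 6)) for name, _, w in ensemble])  # → [('square', 1.0)]
```

On this toy data the loop adds the "square" candidate with weight 1.0 in the first iteration, then stops, because adding anything else would pay the complexity penalty without reducing the loss — the same performance-versus-parameters trade-off AdaNet makes when growing a real ensemble.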
Bhagyashree R
28 Aug 2018
2 min read

React Native 0.57 coming soon with new iOS WebViews

WebView is used to display web content in your iOS applications. Yesterday, React Native announced a new native iOS backend to the WebView component that uses WKWebView. This WebView component will be available in the upcoming React Native 0.57 release.

Apple has discouraged the use of UIWebView for a long time, and it is set to be formally deprecated in the upcoming months. It is advised that you use the WKWebView class instead of UIWebView for apps that run on iOS 8 and above. You can opt into the new implementation with the useWebKit property.

What problems does WKWebView solve?

Earlier, to embed web content in our applications we had two options: UIWebView and WKWebView. UIWebView is the original WebView, introduced in iOS 2.0, and it is known to have some pitfalls and problems. The main drawback is that it has no legitimate way to facilitate communication between the JavaScript running in the WebView and React Native. The newly introduced WKWebView aims to solve this problem. Other benefits of WKWebView over UIWebView include faster JavaScript execution and a multi-process architecture.

What to consider when switching to WKWebView?

You must avoid using the following properties, for the time being:

Using automaticallyAdjustContentInsets and contentInsets may result in inconsistent behavior. When we add contentInsets to UIWebView, the viewport size changes (it gets smaller if the content insets are positive). In the case of WKWebView, the viewport size remains unchanged.

If you use the backgroundColor property, there is a chance that WKWebView may render transparent backgrounds differently from UIWebView, and content can also flicker into view.

Per the React Native community, WKWebView doesn't support the scalesPageToFit property, so they couldn't implement it on the WebView React Native component.

To know more about the new WebViews, check out the announcement on React Native’s official website and the GitHub repository.

Apple releases iOS 12 beta 2 with screen time and battery usage updates among others
React Native 0.56 is now available
Apple releases iOS 11.4 update with features including AirPlay 2, and HomePod among others
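A minimal sketch of the useWebKit opt-in (the component name and URL here are placeholders; the prop is passed directly to WebView in React Native 0.57):

```
import React from 'react';
import { WebView } from 'react-native';

// Opt into the new WKWebView-backed implementation via useWebKit.
const Browser = () => (
  <WebView
    useWebKit={true}
    source={{ uri: 'https://example.com' }}
  />
);

export default Browser;
```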

Bhagyashree R
08 Oct 2018
2 min read

GitHub’s new integration for Jira Software Cloud aims to provide teams a seamless project management experience

Last week, GitHub announced that they have built a new integration to enable software teams to connect their code on GitHub.com to their projects on Jira Software Cloud. This integration updates Jira with data from GitHub, providing better visibility into the current status of your project.

What are the advantages of this new GitHub and Jira integration?

No need to constantly switch between GitHub and Jira

With your GitHub account linked to Jira, your team can see the branches, commit messages, and pull requests in the context of the Jira tickets they’re working on. This integration provides a deeper connection by allowing you to view references to Jira in GitHub issues and pull requests.

Source: GitHub

Improved capabilities

This new GitHub-managed app provides improved security, along with the following capabilities:

Smart commits: You can use smart commits to update the status, leave a comment, or log time without having to leave your command line or GitHub.
View from within a Jira ticket: You can view associated pull requests, commits, and branches from within a Jira ticket.
Searching Jira issues: You can search for Jira issues based on related GitHub information, such as open pull requests.
Check the status of development work: The status of development work can be seen from within Jira projects.
Keep Jira issues up to date: You can automatically keep your Jira issues up to date while working in GitHub.

Install the Jira Software and GitHub app to connect your GitHub repositories to your Jira instance. The previous version of the Jira integration will be deprecated in favor of this new GitHub-maintained integration. Once the migration is complete, the legacy integration (DVCS connector) is disabled automatically.

Read the full announcement at the GitHub blog.

4 myths about Git and GitHub you should know about
GitHub addresses technical debt, now runs on Rails 5.2.1
GitLab raises $100 million, Alphabet backs it to surpass Microsoft’s GitHub
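Smart commits work by embedding Jira commands in the commit message itself. A minimal sketch, using a throwaway repository so it is self-contained (PROJ-42 is a hypothetical issue key; #comment, #time, and #done follow Jira's documented smart-commit syntax):

```shell
# Create a throwaway repo so the example is self-contained.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

echo "fix" > app.txt
git add app.txt

# Smart-commit shape: <ISSUE-KEY> #comment <text> #time <duration> #<transition>
git commit -q -m "PROJ-42 #comment Fix login redirect #time 30m #done"
git log -1 --pretty=%s
```

When this commit is pushed to a connected repository, Jira parses the message, adds the comment and logged time to PROJ-42, and transitions the issue to Done — all without leaving the command line.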