
Tech News

3711 Articles

Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads

Savia Lobo
21 Jun 2018
2 min read
Nvidia recently announced at the Computer Vision and Pattern Recognition (CVPR) conference that it will make Kubernetes available on its GPUs. The technology is not yet generally available, but developers can use it to test the software and provide feedback.

Kubernetes on NVIDIA GPUs will allow developers and DevOps engineers to build and deploy scalable GPU-accelerated deep learning training, as well as inference applications, on multi-cloud GPU clusters. Using this technology, developers can handle the growing number of AI applications and services by automating the deployment, maintenance, scheduling, and operation of GPU-accelerated application containers. Deep learning and HPC applications can be orchestrated on heterogeneous GPU clusters, with easy-to-specify attributes such as GPU type and memory requirements, and the stack offers integrated metrics and monitoring capabilities for analyzing and improving GPU utilization on clusters.

Interesting features of Kubernetes on Nvidia GPUs include:

- GPU support in Kubernetes via the NVIDIA device plugin
- Easy specification of GPU attributes, such as GPU type and memory requirements, for deployment in heterogeneous GPU clusters
- Visualization and monitoring of GPU metrics and health with an integrated GPU monitoring stack of NVIDIA DCGM, Prometheus, and Grafana
- Support for multiple underlying container runtimes, such as Docker and CRI-O
- Official support on all NVIDIA DGX systems (DGX-1 Pascal, DGX-1 Volta, and DGX Station)

Read more about this news on the Nvidia Developer blog.

Related reading:
- NVIDIA brings new deep learning updates at CVPR conference
- Kublr 1.9.2 for Kubernetes cluster deployment in isolated environments released!
- Distributed TensorFlow: Working with multiple GPUs and servers
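As an illustration of the easy-to-specify GPU attributes mentioned above, here is a minimal sketch of a pod manifest requesting one GPU through the NVIDIA device plugin, written as a plain Python dict rather than YAML. The `nvidia.com/gpu` resource name is the one the device plugin registers; the container image tag, pod name, and `accelerator` node label are illustrative assumptions, not taken from the announcement:

```python
# Pod manifest (as a Python dict) requesting one GPU via the NVIDIA
# device plugin. Names and labels are hypothetical examples.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "cuda-training-job"},
    "spec": {
        "containers": [{
            "name": "trainer",
            "image": "nvcr.io/nvidia/tensorflow:18.06-py3",
            # The device plugin exposes GPUs as the extended resource
            # "nvidia.com/gpu"; the scheduler counts them like CPU/memory.
            "resources": {"limits": {"nvidia.com/gpu": 1}},
        }],
        # A node label can steer the pod to a specific GPU type in a
        # heterogeneous cluster (the label name here is an assumption).
        "nodeSelector": {"accelerator": "nvidia-tesla-v100"},
    },
}

def requested_gpus(manifest):
    """Sum the GPU limits across all containers in a pod manifest."""
    return sum(
        int(c.get("resources", {}).get("limits", {}).get("nvidia.com/gpu", 0))
        for c in manifest["spec"]["containers"]
    )

print(requested_gpus(pod))  # prints 1
```

In a real cluster this dict would be serialized to YAML and applied with kubectl; the point here is only how the GPU count and type are declared per container.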


GNOME 3.30 released with improved Desktop performance, Screen Sharing, and more

Sugandha Lahoti
06 Sep 2018
2 min read
The latest version of GNOME 3 has been released today. GNOME 3.30 features many significant performance improvements; in total, the release incorporates 24845 changes made by approximately 801 contributors. GNOME is a desktop environment composed of free and open-source software that runs on Linux and most BSD derivatives.

Fun fact: 3.30 has been named "Almería" in recognition of this year's GUADEC organizing team. GUADEC is GNOME's primary annual conference, which was held in Almería, Spain this year.

Improvements to desktop performance
The entire desktop now uses fewer system resources, so users can run multiple apps at once without encountering performance issues.

Improved screen sharing
With GNOME 3.30, it is now easier than ever to control screen sharing and remote desktop sessions. A newly added system menu displays an indicator when a remote connection is active, making it easy to stop the session when finished.

Update Flatpaks automatically
Flatpak is an emerging technology that makes getting apps fast and secure. Many new apps are already available on Flathub, a repository of curated Flatpaks. GNOME Software, the GNOME software manager, can now automatically update installed Flatpaks.

Focus on content
Web, the GNOME browser, now comes with a new minimal reader view. When viewing a compatible web page, Web can toggle between the normal view and the clean, minimal reader view, which removes irrelevant menus, images, and content not related to the article or document.

Updates to the GNOME virtual machine application
Boxes, the GNOME virtual machine application, can now connect to remote Windows servers using the Remote Desktop Protocol (RDP). Boxes can also import OVA files, making sharing virtual machines even easier.

Gaming application changes
Games, the retro gaming application, can now be navigated by gamepad, making it faster to use: the keyboard is mappable to gamepad inputs, additional details about each available game are displayed in the collection view, and the Flatpak now bundles four more emulator cores.

GNOME 3.30 also introduces a new podcast app called Podcasts that lets you subscribe to and listen to your favorite podcasts, right from your desktop.

These are just a select few updates. For a complete list, read the GNOME Blog.

Related reading:
- Deploying HTML5 Applications with GNOME
- Install GNOME-Shell on Ubuntu 9.10 "Karmic Koala"
- Is Linux hard to learn?


The Union Types 2.0 proposal gets a go-ahead for PHP 8.0

Bhagyashree R
11 Nov 2019
3 min read
Last week, the Union Types 2.0 RFC by Nikita Popov, a software developer at JetBrains, got accepted for PHP 8.0 with 61 votes in favor and 5 against. Popov submitted this RFC as a GitHub pull request to check whether that would be a better medium for RFC proposals in the future, and it got a positive response from many PHP developers.

https://twitter.com/enunomaduro/status/1169179343580516352

What the Union Types 2.0 RFC proposes
PHP type declarations allow you to specify the type of parameters and return values acceptable by a function. Though for most functions the acceptable parameters and possible return values will be of only one type, there are cases when they can be of multiple types. Currently, PHP supports two special union types. One is nullable types, which you can specify using the ?Type syntax to mark a parameter or return value as nullable: in addition to the specified type, NULL can also be passed as an argument or return value. The other is 'array' or 'Traversable', which you can specify using the special iterable type.

The Union Types 2.0 RFC proposes adding support for arbitrary union types, specified using the syntax T1|T2|... Support for union types will enable developers to move more type information from phpdoc into function signatures. Other advantages of arbitrary union types include earlier detection of mistakes and less boilerplate-y code compared to phpdoc. This will also ensure that types are checked during inheritance and are available through Reflection. The RFC does not contain any backward-incompatible changes; however, existing ReflectionType-based code will have to be adjusted to support the processing of code that uses union types.

The RFC for union types was first proposed 4 years ago by PHP open source contributors Levi Morrison and Bob Weinand. This new proposal has a few updates compared to the previous one, which Popov shared on the PHP mailing list thread:

- Updated to specify interaction with new language features, like full variance and property types.
- Updated for the use of the ?Type syntax rather than the Type|null syntax.
- It only supports "false" as a pseudo-type, not "true".
- Slightly simplified semantics for the coercive typing mode.

In a Reddit discussion, many developers welcomed the decision. A user commented, "PHP 8 will be blazing. I can't wait for it." Others felt that this is a step backward: "Feels like a step backward. IMHO, a better solution would have been to add function overloading to the language, i.e. give the ability to add many methods with the same name, but different argument types," a user expressed.

You can read the Union Types 2.0 RFC to know more in detail, and follow the discussion about the RFC on GitHub.

Related reading:
- Symfony leaves PHP-FIG, the framework interoperability group
- Oracle releases GraphPipe: An open-source tool that standardizes machine learning model deployment
- Connecting your data to MongoDB using PyMongo and PHP
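PHP's proposed T1|T2 syntax has a close parallel in Python's typing module, which may help readers more familiar with Python. The sketch below shows the same idea using typing.Union; the function and values are illustrative, not taken from the RFC:

```python
from typing import Union

# A value that may be either an int or a float, analogous to a PHP
# signature like `function scale(int|float $x, int|float $f): int|float`.
Number = Union[int, float]

def scale(x: Number, factor: Number) -> Number:
    """Multiply x by factor; both int and float arguments are acceptable."""
    return x * factor

print(scale(3, 2))      # prints 6
print(scale(1.5, 2.0))  # prints 3.0
```

The benefit the RFC describes is the same in both languages: the accepted types live in the signature itself, where tooling can check them, instead of only in a doc comment.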


Google is planning to bring Node.js support to Fuchsia

Natasha Mathur
20 Mar 2019
2 min read
Google is reportedly planning to bring Node.js to Fuchsia. Yang Guo, a staff software engineer at Google, posted a tweet yesterday saying that he is looking for a full-time software engineer at Google Munich, Germany, who can port Node.js to Fuchsia.

https://twitter.com/hashseed/status/1108016705920364544

"We are interested in bringing JS as a programming language to that platform to add to the list of supported languages," states Guo. Currently, Fuchsia supports languages such as C/C++, Dart, FIDL, Go, Rust, Python, and Flutter modules.

Fuchsia is a new operating system that Google is currently working on. Google has been working on Fuchsia for over two years in the hope that it will replace the dominant Android. Fuchsia is a capability-based operating system built on a new microkernel called "Zircon". Zircon is the core platform that powers Fuchsia. It is made up of a microkernel (source in kernel/...) and a small set of userspace services, drivers, and libraries (source in system/...) that are necessary for the system to boot, talk to hardware, and load and run userspace processes. The Zircon kernel provides the syscalls Fuchsia uses to manage processes, threads, virtual memory, inter-process communication, state changes, and locking.

Fuchsia can run on a variety of platforms ranging from embedded systems to smartphones and tablets. Earlier this year in January, 9to5Google published evidence that Fuchsia can also run Android applications: the 9to5 team spotted a new change in the Android Open Source Project that makes use of a special version of ART (Android Runtime) to run Android apps. This feature would allow devices such as computers, wearables, and others to leverage Android apps in the Google Play Store.

Public reaction to the news is positive, with people supporting the announcement:
https://twitter.com/aksharpatel47/status/1108136513575882752
https://twitter.com/damarnez/status/1108090522508410885
https://twitter.com/risyasin/status/1108029764957294593

Related reading:
- Google's Smart Display – A push towards the new OS, Fuchsia
- Fuchsia's Xi editor is no longer a Google project
- Google AI releases Cirq and Open Fermion-Cirq to boost Quantum computation


Introducing Azure DevOps Server 2019 RC1 with better UI, Azure SQL support and more!

Melisha Dsouza
22 Nov 2018
3 min read
On 19th November, Microsoft announced the first release candidate (RC) of Azure DevOps Server 2019. Azure DevOps Server 2019 includes the new, fast, and clean Azure DevOps user interface and delivers the codebase of Microsoft Azure DevOps, optimized for customers who prefer to self-host.

Features of Azure DevOps Server 2019
In addition to existing SQL Server support, Azure DevOps Server now also supports Azure SQL. Customers can self-host Azure DevOps in their own datacenter using an on-premises SQL Server, or self-host Azure DevOps in the cloud and take advantage of Azure SQL capabilities and performance, such as backup features and scaling options, while reducing the administrative overhead of running the service. Customers can also use the globally available Microsoft-hosted service to take advantage of automatic updates and automatic scaling.

Azure DevOps Server 2019 includes a new release management interface. It helps customers easily understand how their deployment is taking place, giving them better visibility of which bits are deployed to which environments and why. Customers can also mix and match agents self-hosted on-premises and in any cloud on Windows, Mac, or Linux, while easily deploying to IaaS or PaaS in Azure as well as on-premises infrastructure.

A new navigation and improved user experience in Azure DevOps
A newly introduced feature is the 'my work flyout'. It was developed in response to feedback that when customers are in one part of the product and want information from another part, they don't want to lose the context of their current task. Customers can access this flyout from anywhere in the product to get a quick glance at crucial information like work items, pull requests, and all favorites.

For teams that use pull requests (PRs) and branch policies, there may be occasions when members need to override and bypass those policies. So that teams can verify that policy overrides are being used in the right situations, a new notification filter allows users and teams to receive email alerts any time a policy is bypassed.

The Tests tab now gives rich, in-context test information for Pipelines: an in-progress test view, a full-page debugging experience, in-context test history, reporting of aborted test executions, and a run-level summary.

The UI has undergone significant testing, and the team notes that for self-hosting customers the new navigation model may require updates to internal documentation and training. A direct upgrade to Azure DevOps Server is supported from Team Foundation Server 2012 and newer; previous versions of Team Foundation Server will stay on the old user interface. Check the Azure DevOps Server requirements and compatibility page to understand the dependencies required for a self-hosted installation.

Head over to Microsoft's blog for more information. You can download Azure DevOps Server 2019 RC1 and check out the release notes for all the features in this release.

Related reading:
- A multi-factor authentication outage strikes Microsoft Office 365 and Azure users
- Microsoft announces container support for Azure Cognitive Services to build intelligent applications that span the cloud and the edge
- Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report


Microsoft officially releases Microsoft Edge canary builds for macOS users

Bhagyashree R
21 May 2019
3 min read
Yesterday, Microsoft officially made the canary builds of Chromium-based Microsoft Edge available for macOS 10.12 and above. This announcement follows the release of canary and developer previews of Microsoft Edge for Windows 10 users last month.

https://twitter.com/MSEdgeDev/status/1130624513035591680

Edge for Mac had already surfaced online earlier this month: a Twitter user who goes by the name WalkingCat shared download links for the developer and canary builds even before the official release.

https://twitter.com/h0x0d/status/1125607963282948096

Updates in Microsoft Edge for macOS
The macOS preview build comes with essentially the same features as the Windows one, but is tweaked according to macOS conventions. These tweaks include changes to fonts, menus, keyboard shortcuts, title casing, and other areas. The macOS version has rounded corners for tabs, which Microsoft plans to bring to Windows as well. Microsoft has also taken advantage of macOS hardware features to provide experiences exclusive to macOS, including website shortcuts, tab switching, and video controls via the Touch Bar, accessible through familiar Mac trackpad gestures.

With this release, Microsoft aims to provide web developers with a consistent platform across operating systems. This version comes with support for Progressive Web Apps, which you can debug using the browser developer tools. Microsoft wrote in the announcement, "For the first time, web developers can now test sites and web apps in Microsoft Edge on macOS and be confident that those experiences will work the same in the next version of Microsoft Edge across all platforms."

Microsoft Edge Insider Channels
As on Windows 10, the macOS preview builds will be made available through three preview channels, collectively called the Microsoft Edge Insider Channels: Canary builds receive updates every night; Dev builds are more stable than Canary builds and are updated weekly; Beta builds are the most stable of the three and receive updates every 6 weeks. Right now only the Canary channel is open, from which you can download the canary builds of Microsoft Edge. Microsoft says the Dev channel builds will be available "very soon" to run alongside the canary version.

You can share your feedback with Microsoft via the "Send feedback" smiley. To know more in detail, visit Microsoft's blog.

Related reading:
- Microsoft makes the first preview builds of Chromium-based Edge available for testing
- Microsoft confirms replacing EdgeHTML with Chromium in Edge

Node.js and JS Foundation announce intent to merge; developers have mixed feelings

Bhagyashree R
05 Oct 2018
3 min read
Yesterday, the Linux Foundation announced that the Node.js Foundation and the JS Foundation have agreed to possibly create a joint organization. No formal decisions have been made yet regarding the organizational structure. They clarified that joining forces will not change the technical independence or autonomy of Node.js or any of the 28 JS Foundation projects such as Appium, ESLint, or jQuery. A Q&A session will be held at Node+JS Interactive from 7:30 am to 8:30 am PT, October 10, at West Ballroom A to answer questions and get community input on the possible structure of a new foundation.

Why are the Node.js and JS Foundations considering merging?
The idea of the possible merger came from a need for tighter integration between both foundations to provide greater support for Node.js and a broader range of JavaScript projects. JavaScript is continuously evolving and is used for creating applications across web, desktop, and mobile. This calls for increased collaboration in the JavaScript ecosystem to sustain continued and healthy growth.

What are the goals of this merger?
A few of the goals aimed at benefiting the broad Node.js and JavaScript communities:

- Enhanced operational excellence
- Streamlined member engagement
- Increased collaboration across the JavaScript ecosystem and affiliated standards bodies
- An "umbrella" project structure that brings stronger collaboration across all JavaScript projects
- A single, clear home for any project in the JavaScript ecosystem, so projects won't have to choose between the JS and Node.js ecosystems

Todd Moore, Node.js Board Chairperson and IBM VP Opentech, believes the merger will provide improved support to contributors: "The possibility of a combined Foundation and the synergistic enhancements this can bring to end users is exciting. Our ecosystem can only grow stronger and the Foundation's ability to support the contributors to the many great projects involved improve as a result."

How are developers feeling about this potential move?
Not all developers are happy about the merger, which led to a discussion on Hacker News yesterday. One developer feels that the JS Foundation has been neglecting its responsibility towards many open source projects, has seen a reduction in funding, and has alienated many long-time contributors; according to him, this step could be "a last-ditch effort to retain some sort of relevancy." On the other hand, another developer feels positive about the merger: "The JS Foundation is already hosting a lot of popular projects that run in back-end and build/CI environments -- webpack, ESLint, Esprima, Grunt, Intern, JerryScript, Mocha, QUnit, NodeRed, webhint, WebDriverIO, etc. Adding Node.JS itself to the mix would seem to make a lot of sense."

What we think of this move
This merger, if it happens, could unify the fragmented JavaScript ecosystem, bringing some much-needed relief to developers. It could also bring together sponsor members of the likes of Google, IBM, Intel, and others to support the huge number of JavaScript open source projects. We read this move as a reaction to the growing popularity of Python, Rust, and WebAssembly, all challenging JavaScript as the preferred web development ecosystem.

If you have any questions regarding the merger, you can submit them through the Google Form provided by the two foundations. Read the full announcement on the Linux Foundation's website, and check out the announcement by Node.js on Medium.

Related reading:
- Node.js announces security updates for all their active release lines for August 2018
- Why use JavaScript for machine learning?
- The top 5 reasons why Node.js could topple Java


Azure DevOps outage root cause analysis starring greedy threads and rogue scale units

Prasad Ramesh
19 Oct 2018
4 min read
Azure DevOps suffered several outages earlier this month, and Microsoft has done a root cause analysis to find the causes. This comes after severe weather affected the Azure cloud last month.

Incidents on October 3, 4 and 8
It started on October 3 with what appeared to be a networking issue in the North Central US region lasting over an hour, and it happened again the following day, also lasting about an hour. On following up with the Azure networking team, it was found that there were no networking issues when the outages happened. Another incident happened on October 8. At that point it was clear something was fundamentally wrong, and an analysis of telemetry was done, but the issue was still not found.

After the third incident, it was discovered that the thread count on the machine continued to rise, an indication that some activity was going on even with no load coming to the machine. All 1202 threads had the same call stack, with the following key call:

Server.DistributedTaskResourceService.SetAgentOnline

Agent machines send a heartbeat signal every minute to the service to notify that they are online. If no signal arrives from an agent for over a minute, it is marked offline and must reconnect. The agent machines were marked offline in this case, and they eventually reconnected after retries; on success, each agent was stored in an in-memory list. Potentially thousands of agents were reconnecting at a time.

In addition, threads could fill up with messages because asynchronous call patterns had recently been adopted. The .NET message queue stores a queue of messages to process and maintains a thread pool; as a thread becomes available, it services the next message in the queue. The thread pool, in this case, was smaller than the queue: with N threads, N messages are processed simultaneously. When an async call is made, the same message queue is used, and a new message is queued to complete the async call in order to read the value. That completion message sits at the end of the queue while all the threads are occupied processing other messages, so the call cannot complete until the previous messages have completed, tying up one thread. The process comes to a standstill once N messages are being processed, where N equals the number of threads. At that point, a device can no longer process requests, causing the load balancer to take it out of rotation. Hence the outage. An immediate fix was to conditionalize this code so no more async calls were made, which was possible because the pool providers feature isn't in effect yet.

Incident on October 10
On October 10, an incident with a 15-minute impact took place. The initial problem was a spike in slow response times from SPS, ultimately caused by problems in one of the databases. A Team Foundation Server (TFS) deployment put pressure on SPS, their authentication service. When TFS is deployed, sets of scale units called deployment rings are also deployed, and when the deployment for a scale unit completes, it puts extra pressure on SPS. There are built-in delays between scale units to accommodate the extra load, and there is also sharding going on in SPS to break it into multiple scale units. Together, these factors tripped the circuit breakers in the database, which led to slow response times and failed calls. This was mitigated by manually recycling the unhealthy scale units.

For more details and the complete analysis, visit the Microsoft website.

Related reading:
- Real clouds take out Microsoft's Azure Cloud; users, developers suffer indefinite Azure outage
- Why did last week's Azure cloud outage happen? Here's Microsoft's Root Cause Analysis Summary
- Is your Enterprise Measuring the Right DevOps Metrics?
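The sync-over-async starvation described in the analysis can be reproduced in miniature in any language with a fixed-size thread pool. This Python sketch (not Microsoft's .NET code; the pool size of 1 is the minimal case of N threads blocked on N messages) shows a handler that queues a follow-up task on the same pool and then blocks waiting for its result; with every worker occupied, the follow-up is never serviced:

```python
import concurrent.futures
from concurrent.futures import ThreadPoolExecutor

# One worker stands in for the N-thread pool in the outage: the
# starvation occurs once N handlers block simultaneously.
pool = ThreadPoolExecutor(max_workers=1)

def handler():
    # Queue a completion "message" on the same pool, then block on it,
    # exactly like a synchronous wait on an async call.
    inner = pool.submit(lambda: "done")
    # This times out: the only worker is the one running handler(),
    # so `inner` sits in the queue forever.
    return inner.result(timeout=1)

outer = pool.submit(handler)
try:
    outer.result()
    outcome = "completed"
except concurrent.futures.TimeoutError:
    outcome = "starved"
print(outcome)  # prints "starved"
pool.shutdown(wait=True)
```

Microsoft's immediate fix was the moral equivalent of removing the blocking `inner.result()` wait so no message ever depends on a later message in the same queue.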


‘City Power Johannesburg’ hit by a ransomware attack that encrypted all its databases, applications and network

Savia Lobo
26 Jul 2019
4 min read
Yesterday, a ransomware virus hit City Power Johannesburg, the electricity distributor for some parts of Johannesburg, South Africa's largest city. City Power notified citizens via Twitter that the virus has encrypted all its databases, applications, and network, and that its ICT team is trying to fix the issue.

https://twitter.com/CityPowerJhb/status/1154277777950093313

Due to the attack, City Power's website prevented users from lodging complaints or purchasing pre-paid electricity.

https://twitter.com/CityPowerJhb/status/1154278402003804160

The city municipality, owner of City Power, tweeted that the attack also "affected our response time to logged calls as some of the internal systems to dispatch and order material have been slowed by the impact". Chris Baraniuk, a freelance science and technology journalist, tweeted, "The firm tells me more than 250,000 people would have had trouble paying for pre-paid electricity, potentially leaving them cut off". City Power hasn't yet released information on the scale of the impact.

The ransomware attack occurs amidst existing power outages
According to iAfrikan, the ransomware attack struck while the city was "experiencing a strain on the power grid due to increased use of electricity during Johannesburg's recent cold winter weather". The strain on the grid has resulted in multiple power outages in different parts of the city. According to Bleeping Computer, Business Insider South Africa reported that an automated voice message on City Power's phone helpline said, "Dear customers, please note that we are currently experiencing a problem with our prepaid vending system. We are working on this issue and hope to have it resolved by one o'clock today (25 July 2019)".

The city municipality tweeted yesterday that "most of the IT applications and networks that were affected by the cyberattack have been cleaned up and restored." The municipality apologized for the inconvenience and assured customers that none of their details were compromised.

https://twitter.com/CityPowerJhb/status/1154626973056012288

Many users have raised requests tagging the municipality and the electricity distributor on Twitter. City Power replied, "Technicians will be dispatched to investigate and work on restorations", and later tweeted asking them to cancel their requests as power had been restored.

https://twitter.com/GregKee/status/1154397914191540225

A recent tweet today at 10:47 am (SAST) from City Power says, "Electricity supply points to be treated as live at all times as power can be restored anytime. City Power regrets any inconvenience that may be caused by the interruption".

https://twitter.com/CityPowerJhb/status/1154674533367988224

Luckily, City Power Johannesburg escaped paying a ransom
A ransomware attack locks up a company's or individual's systems until a ransom, often demanded in Bitcoin, is paid to the attackers to release them. According to Business Insider South Africa, attackers usually turn all the information in the databases into "gibberish, intelligible only to those with the right encryption key. Attackers then offer to sell that key to the victim, allowing for the swift reversal of the damage".

There have been many such instances this year, and Johannesburg has been lucky to escape paying a huge ransom. Early this month, a Ryuk ransomware attack encrypted Lake City's IT network in the United States, and officials had to approve a payment of nearly $500,000 to restore operations. Similarly, Jackson County officials in Georgia, USA, paid $400,000 to cyber-criminals to resolve a ransomware infection, and La Porte County, Indiana, US, paid $130,000 to recover data from its encrypted computer systems. According to The Next Web, the "ever-growing list of ransomware attacks has prompted the United States Conference of Mayors to rule that they would not pay ransomware demands moving forward."

Jim Trainor, who formerly led the Cyber Division at FBI Headquarters and is now a senior vice president in the Cyber Solutions Group at risk management and insurance brokerage firm Aon, told CSO, "I would highly encourage a victim of a ransomware attack to work with the FBI and report the incident". The FBI "strongly encourages businesses to contact their local FBI field office upon discovery of a ransomware infection and to file a detailed complaint at www.ic3.gov". Maintaining good security habits is the best way to deal with ransomware attacks, according to the FBI: "The best approach is to focus on defense-in-depth and have several layers of security as there is no single method to prevent compromise or exploitation."

To know more about the City Power Johannesburg ransomware attack, head over to Bleeping Computer's coverage.

Related reading:
- Microsoft releases security updates: a "wormable" threat similar to WannaCry ransomware discovered
- Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
- Anatomy of a Crypto Ransomware


#WeWontBuildIt: Amazon workers demand company to stop working with Palantir and take a stand against ICE

On Monday, a group of Amazon employees sent out an internal email to the We Won’t Build It mailing list, calling on Amazon to stop working with Palantir. Palantir, a data analytics company founded by Peter Thiel, one of President Trump’s most vocal supporters in Silicon Valley, has a strong association with Immigration and Customs Enforcement (ICE). https://twitter.com/WeWontBuildIt/status/1155872860742664194 Last year in June, an alliance of more than 500 Amazon employees signed a petition addressed to CEO Jeff Bezos and AWS head Andy Jassy, asking the company to abandon its contracts with government agencies. It seems that those protests are ramping up again. The email sent to employee mailing lists within Amazon Web Services demanded that Palantir be removed from Amazon’s cloud for violating its terms of service. It also called on Amazon to take a stand against ICE by making a statement establishing its position against immigration raids, deportations and camps for migrants at the border. The employees also demanded that Amazon stop selling its facial recognition tech to government agencies. https://twitter.com/WeWontBuildIt/status/1155872862055485441 In May, Amazon shareholders had rejected a proposal to ban the sale of its facial recognition tech to governments. They had also rejected eleven other proposals made by employees, including a climate resolution, salary transparency and other issues. "The world is watching the abuses in ICE's concentration camps unfold. We know that our company should, and can do better,” the email read. The protests broke out at Amazon’s AWS Summit, held in New York last week on Thursday. As Amazon CTO Werner Vogels gave a presentation, a group led by a man identified in a tweet as a tech worker interrupted to protest Amazon’s ties with ICE.
https://twitter.com/altochulo/status/1149305189800775680 https://twitter.com/MaketheRoadNY/status/1149306940377448449 Vogels was caught off guard by the protests but continued on with the specifics of AWS, according to ZDNet. “I’m more than willing to have a conversation, but maybe they should let me finish first,” Vogels said amidst the protests, whose audio was cut off on Amazon’s official livestream of the event, per ZDNet. “We’ll all get our voices heard,” he said before returning to his planned speech. According to Business Insider, Palantir has a $51 million contract with ICE, which entails providing software to gather data on undocumented immigrants’ employment information, phone records, immigration history and similar information. Its software is hosted in the AWS cloud. The email states that Palantir enables ICE to violate the rights of others and that working with such a company is harmful to Amazon’s reputation. The employees also state that their protest is in the spirit of similar actions at companies including Wayfair, Microsoft and Salesforce, where workers have protested against their employers to cut ties with ICE and US Customs and Border Protection (CBP). Amazon has been facing increasing pressure from its employees. Last week, workers protested on Amazon Prime Day, demanding safe working conditions and fair wages. Amazon, which typically takes a cursory view of such employee outcry, has so far given no indication that it will reconsider providing services to Palantir and other law enforcement agencies. Instead, the company argued that the government should determine what constitutes “acceptable use” of the technology it sells. “As we’ve said many times and continue to believe strongly, companies and government organizations need to use existing and new technology responsibly and lawfully,” Amazon said to BuzzFeed News.
“There is clearly a need for more clarity from governments on what is acceptable use of AI and ramifications for its misuse, and we’ve provided a proposed legislative framework for this. We remain eager for the government to provide this additional clarity and legislation, and will continue to offer our ideas and specific suggestions.” Other tech worker groups, like Google Walkout For Real Change and Ban Google for Pride, stand in solidarity with Amazon workers on this protest. https://twitter.com/GoogleWalkout/status/1155976287803998210 https://twitter.com/NoPrideForGoog/status/1155906615930806276 #TechWontBuildIt: Entropic maintainer calls for a ban on Palantir employees contributing to the project and asks other open source communities to take a stand on ethical grounds Amazon workers protest on its Prime day, demand a safe work environment and fair wages Amazon shareholders reject proposals to ban sale of facial recognition tech to govt and to conduct independent review of its human and civil rights impact
Savia Lobo
09 Sep 2019
3 min read

Wikipedia hit by massive DDoS (Distributed Denial of Service) attack; goes offline in many countries

Two days ago, on September 7, Wikipedia confirmed in an official statement that it was hit by a malicious attack the previous day, causing it to go offline in many countries at irregular intervals. The “free online encyclopedia” said the attack was ongoing and that its Site Reliability Engineering team was working to curb the attack and restore access to the site. According to downdetector, users across Europe and parts of the Middle East experienced outages shortly before 7pm BST on September 6. Also Read: Four versions of Wikipedia goes offline in a protest against EU copyright Directive which will affect free speech online The UK was one of the first countries to report slow and choppy access to the site. This was followed by reports of the site being down in several other European countries, including Poland, France, Germany, and Italy. Source: Downdetector.com By Friday evening, 8.30 pm (ET), the attack had extended to an almost-total outage in the United States and other countries. During this time, no spokesperson was available for comment at the Wikimedia Foundation. https://twitter.com/netblocks/status/1170157756579504128 On September 6, at 20:53 (UTC), Wikimedia Germany informed users in a tweet that a “massive and very broad” DDoS (Distributed Denial of Service) attack on the Wikimedia Foundation servers was making the website impossible to access for many users. https://twitter.com/WikimediaDE/status/1170077481447186432 The official statement from the Wikimedia Foundation reads, “We condemn these sorts of attacks. They’re not just about taking Wikipedia offline. Takedown attacks threaten everyone’s fundamental rights to freely access and share information. We in the Wikimedia movement and Foundation are committed to protecting these rights for everyone.” Cybersecurity researcher Baptiste Robert, who goes by the online name Elliot Anderson, wrote on Twitter, “A new skids band is in town. @UKDrillas claimed they are behind the DDOS attack of Wikipedia.
You’ll never learn... Bragging on Twitter (or elsewhere) is the best way to get caught. I hope you run fast.” https://twitter.com/fs0c131y/status/1170093562878472194 https://twitter.com/atoonk/status/1170400761722724354 To know about this news in detail, read Wikipedia’s official statement. Other interesting news in Security “Developers need to say no” – Elliot Alderson on the FaceApp controversy in a BONUS podcast episode [Podcast] CircleCI reports of a security breach and malicious database in a third-party vendor account Hundreds of millions of Facebook users’ phone numbers found online, thanks to an exposed server, TechCrunch reports
Matthew Emerick
14 Oct 2020
3 min read

Zone Redundancy for Azure Cache for Redis now in preview from Microsoft Azure Blog > Announcements

Between waves of pandemics, hurricanes, and wildfires, you don’t need cloud infrastructure adding to your list of worries this year. Fortunately, there has never been a better time to ensure your Azure deployments stay resilient. Availability zones are one of the best ways to mitigate risks from outages and disasters. With that in mind, we are announcing the preview for zone redundancy in Azure Cache for Redis. Availability Zones on Azure Azure Availability Zones are geographically isolated datacenter locations within an Azure region, providing redundant power, cooling, and networking. By maintaining a physically separate set of resources with the low latency from remaining in the same region, Azure Availability Zones provide a high availability solution that is crucial for businesses requiring resiliency and business continuity. Redundancy options in Azure Cache for Redis Azure Cache for Redis is increasingly becoming critical to our customers’ data infrastructure. As a fully managed service, Azure Cache for Redis provides various high availability options. By default, caches in the standard or premium tier have built-in replication with a two-node configuration—a primary and a replica hosting two identical copies of your data. New in preview, Azure Cache for Redis can now support up to four nodes in a cache distributed across multiple availability zones. This update can significantly enhance the availability of your Azure Cache for Redis instance, giving you greater peace of mind and hardening your data architecture against unexpected disruption. High Availability for Azure Cache for Redis The new redundancy features deliver better reliability and resiliency. First, this update expands the total number of replicas you can create. You can now implement up to three replica nodes in addition to the primary node. Having more replicas generally improves resiliency (even if they are in the same availability zone) because of the additional nodes backing up the primary. 
Even with more replicas, a datacenter-wide outage can still disrupt your application. That’s why we’re also enabling zone redundancy, allowing replicas to be located in different availability zones. Replica nodes can be placed in one or multiple availability zones, with failover automatically occurring if needed across availability zones. With Zone Redundancy, your cache can handle situations where the primary zone is knocked offline due to issues like floods, power outages, or even natural disasters. This increases availability while maintaining the low latency required from a cache. Zone redundancy is currently only available on the premium tier of Azure Cache for Redis, but it will also be available on the enterprise and enterprise flash tiers when the preview is released. Industry-leading service level agreement Azure Cache for Redis already offers an industry-standard 99.9 percent service level agreement (SLA). With the addition of zone redundancy, the availability increases to a 99.95 percent level, allowing you to meet your availability needs while keeping your application nimble and scalable. Adding zone redundancy to Azure Cache for Redis is a great way to promote availability and peace of mind during turbulent situations. Learn more in our documentation and give it a try today. If you have any questions or feedback, please contact us at AzureCache@microsoft.com.
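Automatic failover across zones keeps the cache available, but a client may still see a brief burst of connection errors while a replica is being promoted, so applications should retry transient failures with backoff. A minimal, library-agnostic sketch of that pattern (the helper name and parameters are illustrative, not part of any Azure SDK or Redis client):

```python
import time

def with_retry(operation, attempts=5, base_delay=0.1):
    """Call operation(); on a transient ConnectionError, retry with
    exponential backoff (0.1s, 0.2s, 0.4s, ...). Useful while a cache
    failover is in progress."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))

# Simulated cache call that fails twice (the failover window), then succeeds.
calls = {"n": 0}
def get_from_cache():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("cache temporarily unavailable")
    return "cached-value"

assert with_retry(get_from_cache) == "cached-value"
assert calls["n"] == 3
```

Most production Redis clients ship their own configurable retry policies; the point of the sketch is only that zone redundancy on the server side pairs naturally with retry logic on the client side.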
Vincy Davis
16 Aug 2019
3 min read

Mapbox introduces MARTINI, a client-side terrain mesh generation code

Two days ago, Vladimir Agafonkin, an engineer at Mapbox, introduced a client-side terrain mesh generation library called MARTINI, short for ‘Mapbox's Awesome Right-Triangulated Irregular Networks Improved’. It uses a Right-Triangulated Irregular Network (RTIN) mesh, which consists of big right-angle triangles, to render smooth and detailed terrain in 3D. RTIN has two main advantages: The algorithm generates a hierarchy of approximations of varying precision, enabling quick retrieval of a mesh at any detail level. It is very fast, making it feasible for client-side meshing from raster terrain tiles. In a blog post, Agafonkin demonstrates a drag-and-zoom terrain visualization that lets users adjust mesh precision in real time. The visualization also displays the number of triangles generated for a given error rate. Image Source: Observable How does the RTIN hierarchy work Mapbox's MARTINI uses the RTIN algorithm on grids of size (2^k+1) x (2^k+1); “that's why we add 1-pixel borders on the right and bottom”, says Agafonkin. The algorithm first builds an error map, a grid of error values that guides the subsequent mesh retrieval. The error map indicates whether a certain triangle has to be split, based on its height error value. The algorithm first calculates the error approximation of the smallest triangles, which is then propagated to the parent triangles. This process is repeated until the errors of the top two triangles are calculated and a full error map is produced. This process results in zero T-junctions and thus no gaps in the mesh. Image Source: Observable For retrieving a mesh, the RTIN hierarchy starts with two big triangles, which are then subdivided to approximate the terrain according to the error map.
Agafonkin says, “This is essentially a depth-first search in an implicit binary tree of triangles, and takes O(numTriangles) steps, resulting in nearly instant mesh generation.” Users have appreciated the MARTINI demo and animation presented by Agafonkin in the blog post. A user on Hacker News says, “This is a wonderful presentation of results and code, well done! Very nice to read.” Another user comments, “Fantastic. Love the demo and the animation.” Another comment on Hacker News reads, “This was a pleasure to play with on an iPad. Excellent work.” For more details on the code and algorithm used in MARTINI, check out Agafonkin’s blog post. Introducing Qwant Maps: an open source and privacy-preserving maps, with exclusive control over geolocated data Top 7 libraries for geospatial analysis Using R to implement Kriging – A Spatial Interpolation technique for Geostatistics data
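The two phases described above — a bottom-up error map followed by a top-down, depth-first mesh retrieval — can be sketched in simplified form. This is an illustrative Python reimplementation of the RTIN idea on a (2^k+1) x (2^k+1) height grid, not Mapbox's actual MARTINI code (which is JavaScript and uses a flat, non-recursive encoding for speed):

```python
def build_error_map(terrain, size):
    """Bottom-up pass: errors[y][x] = height error introduced if the
    vertex at (x, y) is approximated away, maximized over descendants."""
    errors = [[0.0] * size for _ in range(size)]

    def update(ax, ay, bx, by, cx, cy):
        # m is the midpoint of the hypotenuse a-b (the right angle is at c)
        mx, my = (ax + bx) // 2, (ay + by) // 2
        interpolated = (terrain[ay][ax] + terrain[by][bx]) / 2
        err = abs(interpolated - terrain[my][mx])
        if abs(ax - bx) + abs(ay - by) > 2:  # children still have lattice midpoints
            err = max(err,
                      update(cx, cy, ax, ay, mx, my),   # left child
                      update(bx, by, cx, cy, mx, my))   # right child
        errors[my][mx] = max(errors[my][mx], err)       # propagate upward
        return errors[my][mx]

    last = size - 1
    update(0, last, last, 0, 0, 0)          # top-left root triangle
    update(last, 0, 0, last, last, last)    # bottom-right root triangle
    return errors


def extract_mesh(errors, size, max_error):
    """Top-down pass: depth-first split of the two root triangles,
    subdividing wherever the stored error exceeds max_error."""
    triangles = []

    def refine(ax, ay, bx, by, cx, cy):
        sx, sy = ax + bx, ay + by
        if sx % 2 == 0 and sy % 2 == 0:      # hypotenuse midpoint is a lattice point
            mx, my = sx // 2, sy // 2
            if errors[my][mx] > max_error:   # too coarse here: split into children
                refine(cx, cy, ax, ay, mx, my)
                refine(bx, by, cx, cy, mx, my)
                return
        triangles.append(((ax, ay), (bx, by), (cx, cy)))

    last = size - 1
    refine(0, last, last, 0, 0, 0)
    refine(last, 0, 0, last, last, last)
    return triangles


# A flat 3x3 tile needs only the two root triangles...
flat = [[0] * 3 for _ in range(3)]
assert len(extract_mesh(build_error_map(flat, 3), 3, 0.5)) == 2

# ...while a bump in the middle forces one level of subdivision.
bump = [[0, 0, 0], [0, 10, 0], [0, 0, 0]]
assert len(extract_mesh(build_error_map(bump, 3), 3, 1.0)) == 4
```

Because a parent's stored error is always at least its children's, a triangle is never emitted next to a more finely split neighbor that disagrees along the shared edge — which is the zero-T-junction property the article mentions.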
Bhagyashree R
06 Feb 2019
2 min read

The Electron team publicly shares the release timeline for Electron 5.0

Last week, the team behind Electron announced that they will share the release timeline for Electron 5.0 and beyond publicly. For now, they have posted the schedule for Electron 5.0, which will include Chromium M72 and Node 12.0. Here’s the schedule the team has made public, which may still change: Source: Electron The Electron team has been working towards making its release cycles faster and more stable. In December last year, they planned to release new versions of Electron in cadence with new versions of its major components, including Chromium, V8, and Node. They started working according to this plan and have seen some success. Source: Electron The team was able to release the last two versions of Electron (3.0 and 4.0) almost in parallel with Chromium’s latest versions. To keep up with Chromium releases, each of the two versions was released on a 2-3 month timeline. They will continue this pattern for Electron 5.0 and beyond, so for now, developers can expect a major Electron release approximately every quarter. Read the timeline shared by the Electron team on their official website. Flutter challenges Electron, soon to release a desktop client to accelerate mobile development Electron 3.0.0 releases with experimental textfield, and button APIs Electron Fiddle: A ‘code playground’ for experimenting with cross-platform native apps
Amrata Joshi
13 Sep 2019
3 min read

Lilocked ransomware (Lilu) affects thousands of Linux-based servers

A ransomware strain named Lilocked or Lilu has been affecting thousands of Linux-based servers all over the world since mid-July, and the attacks intensified by the end of August, ZDNet reports. The first case of Lilocked was noticed when Michael Gillespie, a malware researcher, uploaded a ransomware note to the website ID Ransomware. This website is used for identifying the name of a ransomware strain from its ransom note or from the demand specified in the attack. It is still unknown how the servers were breached. https://twitter.com/demonslay335/status/1152593459225739265 According to a thread on a Russian-speaking forum, attackers might be targeting systems running outdated Exim (email) software. The forum also mentions that the ransomware managed to get root access to servers by “unknown means”. Read Also: Exim patches a major security bug found in all versions that left millions of Exim servers vulnerable to security attacks Lilocked doesn't encrypt system files; it encrypts only a small subset of file extensions, such as JS, CSS, HTML, SHTML, PHP, INI, and various image formats, so the infected servers keep running normally. As per French security researcher Benkow, Lilocked has encrypted more than 6,700 servers, out of which many have been indexed and cached in Google search results. However, the number of affected servers is likely much higher. “Not all Linux systems run web servers, and there are many other infected systems that haven't been indexed in Google search results,” ZDNet reports. It is easy to identify servers affected by the ransomware, as most of their files are encrypted and sport a new ".lilocked" file extension.
Image Source: ZDNet The victims are first redirected to a portal on the dark web, where they are asked to enter a key from the ransom note and are then notified that their data has been encrypted. The victims are asked to transfer 0.03 bitcoin, which is around $325. https://twitter.com/dulenkp/status/1170091139510218752 https://twitter.com/Zanket_com/status/1171089344460972032 To know more about the Lilocked ransomware in detail, head over to ZDNet. Other interesting news in security Intel’s DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack Endpoint protection, hardening, and containment strategies for ransomware attack protection: CISA recommended FireEye report Highlights StackRox App integrates into the Sumo Logic Dashboard for improved Kubernetes security
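Since infected servers keep running normally and the encrypted files simply gain the extra ".lilocked" suffix, an administrator can get a quick inventory of affected files with a recursive extension scan. A minimal sketch (the directory layout below is invented purely for the demonstration):

```python
import pathlib
import tempfile

def find_locked_files(root):
    """Return all files under root carrying the '.lilocked' extension."""
    return sorted(p for p in pathlib.Path(root).rglob("*.lilocked"))

# Demonstrate on a throwaway directory tree standing in for a web root.
with tempfile.TemporaryDirectory() as tmp:
    site = pathlib.Path(tmp)
    (site / "index.html").write_text("<html></html>")       # untouched file
    (site / "style.css.lilocked").write_text("gibberish")   # encrypted file
    (site / "assets").mkdir()
    (site / "assets" / "logo.png.lilocked").write_text("gibberish")

    locked = find_locked_files(site)
    assert [p.name for p in locked] == ["logo.png.lilocked", "style.css.lilocked"]
```

A scan like this only measures the damage; restoring the files still requires clean backups, since the encryption key is held by the attackers.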