Tech News

Windows launches progressive web apps... that don't yet work on mobile

Richard Gall
09 Apr 2018
2 min read
Progressive web apps are now available in the Microsoft Store. But just when you thought Microsoft was taking a step to plug the 'app gap' and catch up with the competition, it turns out this first wave of progressive web apps won't actually work on Windows mobile.

One of the central problems with the new Windows progressive web apps is that they do not have service workers implemented for Edge mobile, which means they aren't able to send push notifications. This is bad news for the Windows 10 Mobile platform generally. It's possible that Microsoft might ship further updates for progressive web apps on mobile, but it nevertheless signals that Microsoft just doesn't have the hunger to commit to its mobile project. As we saw just a few days ago, the company more broadly appears to be moving towards infrastructure and cloud projects. The issues around progressive web apps may well be symptomatic of this broader organizational shift.

For TechRadar, this is a misstep by Microsoft: "There's very little evidence out there that Microsoft is willing to put in the massive effort needed to get back on terms with iOS and Android devices, even in the enterprise sector, so the future doesn't look too rosy at the moment."

However, while disappointment is understandable, there's a chance that these issues will be corrected, and it wouldn't actually take that much for Microsoft to fix the problem. Development teams could then deploy updates to their respective applications fairly easily, without having to go through the rigmarole of submitting to the app store all over again.

The list of companies that have PWAs available is, we should note, pretty impressive. It's clear that some big players in a number of different fields want to get involved:

Skyscanner
Asos
Ziprecruiter
Oyster
StudentDoctorNetwork

What this means for the future of Windows mobile isn't clear. It certainly doesn't look great from Microsoft's perspective, and you could say this has been a bit of a missed opportunity. But all is not lost, and the company could quickly recover and use PWAs to redefine the future of its mobile offering.

Check out other latest news:
Verizon launches AR Designer, a new tool for developers
Leap Motion open sources its $100 augmented reality headset, North Star


Cloudflare adds Warp, a free VPN, to its 1.1.1.1 DNS app to improve internet performance and security

Natasha Mathur
02 Apr 2019
3 min read
Cloudflare announced yesterday that it is adding Warp, a free VPN, to its 1.1.1.1 DNS resolver app. The Cloudflare team states that it began planning to integrate the 1.1.1.1 app with Warp's performance and security technology about two years ago.

The 1.1.1.1 app was released in November last year for iOS and Android. The mobile app included VPN support that routed mobile traffic towards the 1.1.1.1 DNS servers, thereby helping improve speeds. Now, with the Warp integration, the 1.1.1.1 app will speed up mobile data by using the Cloudflare network to resolve DNS queries faster.

With Warp, all unencrypted connections are automatically encrypted by default. Warp also comes with end-to-end encryption and doesn't require users to install a root certificate so that Cloudflare can observe encrypted Internet traffic. For cases when you browse the unencrypted Internet through Warp, Cloudflare's network can cache and compress content to improve performance and decrease your data usage and mobile carrier bill. "In the 1.1.1.1 App, if users decide to enable Warp, instead of just DNS queries being secured and optimized, all Internet traffic is secured and optimized. In other words, Warp is the VPN for people who don't know what V.P.N. stands for," states the Cloudflare team.

Apart from that, Warp also aims at strong performance and reliability. Warp is built around a UDP-based protocol that has been optimized for the mobile Internet. It also makes use of Cloudflare's massive global network, allowing Warp to connect to servers within milliseconds. Cloudflare's testing shows that Warp increases internet performance, and reliability is significantly improved as well: Warp cannot eliminate mobile dead spots, but it is very efficient at recovering from packet loss. Warp also doesn't noticeably increase battery usage, as it is built around WireGuard, a new and efficient VPN protocol.

The basic version of Warp has been added to the 1.1.1.1 app as a free option. However, Cloudflare will be charging for Warp+, a premium version of Warp that will be even faster thanks to Argo technology. A low monthly fee, varying by region, will be charged for Warp+. The 1.1.1.1 app with Warp will also retain all the privacy protections launched with the original 1.1.1.1 app.

The Cloudflare team states that the 1.1.1.1 app with Warp is still in the works, and although sign-ups for Warp aren't open yet, Cloudflare has started a waiting list where you can "claim your place" by downloading the 1.1.1.1 app or by updating the existing app. Once the service is available, you'll be notified. "Our whole team is proud that today, for the first time, we've extended the scope of that mission meaningfully to the billions of other people who use the Internet every day," states the Cloudflare team.

For more information, check out the official Warp blog post.

Cloudflare takes a step towards transparency by expanding its government warrant canaries
Cloudflare raises $150M with Franklin Templeton leading the latest round of funding
workers.dev will soon allow users to deploy their Cloudflare Workers to a subdomain of their choice


Splunk leverages AI in its monitoring tools

Richard Gall
30 Apr 2018
2 min read
Just weeks after the announcement of Splunk IAI (Industrial Asset Intelligence), Splunk has revealed it will be enhancing machine learning across many of its products, including Splunk Enterprise, IT Service Intelligence, and User Behavior Analytics. Clearly, the company is using Spring 2018 as a period to build a solid foundation to future-proof its products.

Splunk has also added an 'Experiment Management Interface' to its Machine Learning Toolkit. This is a crucial update that will make tracking machine learning and AI 'experiments' much easier, which in turn makes monitoring a range of issues much easier.

Splunk's goal here is to reduce what it calls "event noise." The machine learning and AI algorithms will help cut through the volume of data and information at users' disposal, allowing them to identify the issues that are most business-critical. It's about more than just analytics; it's the additional dimension that makes prioritization much more straightforward. That's what distinguishes what Splunk is doing from competitors. Typically, machine learning in BI software allows users to monitor issues but doesn't have the capacity to place those issues in a wider business context.

There is a wide range of applications for this technology. It could be used to identify security issues within a given system, monitor application performance, or even support operational management. Tim Tully, CTO, had this to say: "Our latest wave of innovation is intended to arm customers with the tools needed to translate AI into actionable intelligence. While AI and machine learning often seem like unattainable and expensive pipe dreams, Splunk Cloud and Splunk Enterprise now make it easier and more affordable to monitor, analyze and visualize machine data in real time."

Of course, while Tully's words contain an element of marketing-speak made for a press release, it's worth noting that the goal here from Splunk's perspective is all about making AI and machine learning more accessible. Clearly the company knows what its customers want. This suggests, then, that for all the discussion around the machine learning revolution, there are still many businesses that regard machine learning as a considerable challenge.


Firefox 67 will come with faster and more reliable JavaScript debugging tools

Bhagyashree R
20 May 2019
3 min read
Last week, the Firefox DevTools Debugger team shared recent updates to Firefox DevTools that make debugging modern apps more consistent. They have also worked on making the debugger more predictable and capable of understanding common web development tools like webpack, Babel, and TypeScript. These updates are ready to try out in Firefox 67, which is planned for release tomorrow (May 21). The team also shared that Firefox 68 will come with a more "polished" version of these features.

https://twitter.com/FirefoxDevTools/status/1129066199017353216

Today, every browser comes with a powerful suite of developer tools that allows you to easily inspect and debug your web applications. These tools let you do things like inspect currently loaded JavaScript, edit pages on the fly, and quickly diagnose problems. The Firefox team has introduced many improvements and updates to these tools, and here are some of the highlights:

Revamped source map support
Source maps provide a way to keep your client-side code readable and debuggable even after combining and minifying it. The new debugger comes with revamped support for source maps that now "perfects the illusion that you're debugging your code, not the compiled output from Babel, Webpack, TypeScript, vue.js, etc." To help developers generate correct source maps, the team and the community have contributed patches to build tools like Babel, a JavaScript compiler and configurable transpiler.

Predictable breakpoints for effortless pausing and stepping
The improved debugger architecture solves several issues developers commonly faced, such as lost breakpoints, pausing in the wrong script, or stepping through pretty-printed code. They will now also be able to easily debug minified scripts, arrow functions, and chained method calls with the help of inline breakpoints.

Console debugging with logpoints
Developers often resort to console logging (using console.log statements to print messages to the console) when they want to quickly observe their program's flow without pausing execution. However, this way of debugging can become quite tedious. This is why, starting from Firefox 67, developers get a new kind of breakpoint called a 'logpoint' that dynamically injects console.log() statements into a running application.

Better debugging for JavaScript workers
A web worker is a script that runs in the background without affecting the main execution thread of a web application. It takes care of laborious processing, allowing the main thread to run without being slowed down. Firefox will now come with an updated Threads panel through which you can switch between contexts and independently pause different execution contexts. This allows workers and their scripts to be debugged within the same Debugger panel.

These were some of the highlights from a long list of updates and improvements. Check out the official announcement by Mozilla to learn more.

Mozilla developers have built BugBug which uses machine learning to triage Firefox bugs
Mozilla adds protection against fingerprinting and Cryptomining scripts in Firefox Nightly and Beta
Mozilla is exploring ways to reduce notification permission prompt spam in Firefox


Is it time to ditch Chrome? Ad blocking extensions will now only be for enterprise users

Sugandha Lahoti
03 Jun 2019
6 min read
Update: Opera, Brave, and Vivaldi have ignored Chrome's anti-ad-blocker changes, despite the shared codebase.

On June 12, Google published a blog post clarifying its intentions for the ad blocking extension system, saying it isn't trying to kill ad blockers: "This has been a controversial change since the Web Request API is used by many popular extensions, including ad blockers. We are not preventing the development of ad blockers or stopping users from blocking ads. Instead, we want to help developers, including content blockers, write extensions in a way that protects users' privacy."

In January, Chrome updated its Manifest V3 extension system in a way that could cripple all ad blockers. Even though the Manifest changes received overwhelmingly negative feedback, Google is standing firm on them. Last week, the company shared a statement on Google Groups confirming the plan: Chrome will still have the capability to block unwanted content, but this will be restricted to paid, enterprise users of Chrome. "Chrome is deprecating the blocking capabilities of the webRequest API in Manifest V3, not the entire webRequest API (though blocking will still be available to enterprise deployments)."

What is the Manifest V3 controversy?
Google developers have introduced an alternative to the webRequest API named the declarativeNetRequest API, which limits the blocking version of the webRequest API. declarativeNetRequest is a less expressive, rules-based system. Chrome currently imposes a limit of 30,000 rules; however, most popular ad blocking rules lists use almost 75,000 rules. Although Google claimed that it is looking to increase this number, it made no commitment: "We are planning to raise these values but we won't have updated numbers until we can run performance tests to find a good upper bound that will work across all supported devices."

According to Manifest V3, the declarativeNetRequest API will be treated as the primary content-blocking API in extensions. Chrome developers listed two reasons behind this update: performance, and better privacy guarantees for users. The API allows extensions to tell Chrome what to do with a given request, rather than have Chrome forward the request to the extension, which lets Chrome handle requests synchronously.

On the performance claim, however, a study published on WhoTracks.me analyzed the network performance of the most commonly used ad blockers: uBlock Origin, Adblock Plus, Brave, DuckDuckGo, and Cliqz's Ghostery. The study revealed that these content blockers, except DuckDuckGo's, have only sub-millisecond median decision time per request, an overhead too small for users to notice. Additionally, the efficiency of content blockers is continuously being improved with innovative approaches and with the help of technologies like WebAssembly.
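To make the distinction concrete, here is a toy Python sketch, purely illustrative and in no way Chrome's actual code, of the difference between a declarative, rules-based API and a blocking, callback-based one; every name in it is made up:

```python
# Toy model of the two extension API styles discussed above (illustrative only).

# declarativeNetRequest-style: the extension ships a fixed rule list and the
# browser itself decides, synchronously, whether to block each request.
BLOCK_RULES = ["ads.example.com", "tracker.example.net"]  # hypothetical rules

def declarative_should_block(url):
    # The browser scans the rule list; no extension code runs per request.
    return any(pattern in url for pattern in BLOCK_RULES)

# webRequest-style: the browser forwards each request to arbitrary extension
# code, which can implement any matching algorithm it likes (as uBlock Origin
# and uMatrix do), at the cost of a round-trip into the extension.
def webrequest_should_block(url, extension_callback):
    return extension_callback(url)

if __name__ == "__main__":
    url = "https://ads.example.com/pixel.gif"
    print(declarative_should_block(url))                        # True
    print(webrequest_should_block(url, lambda u: "ads." in u))  # True
```

The toy version makes the trade-off visible: the declarative model caps what blockers can express (a bounded rule list), while the blocking model allows open-ended logic, which is exactly the capability the uBlock Origin developers say they depend on.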
A uBlock Origin maintainer had earlier reported an issue on the Chromium bug tracker about this change: "If this (quite limited) declarativeNetRequest API ends up being the only way content blockers can accomplish their duty, this essentially means that two content blockers I have maintained for years, uBlock Origin ("uBO") and uMatrix, can no longer exist."

In its update, Google wrote that appropriately permissioned extensions will still be able to observe network requests using the webRequest API, which it insisted is "foundational for extensions that modify their behavior based on the patterns they observe at runtime."

Now the lead developer of uBlock Origin, Raymond Hill, has commented on the situation. Losing the ability to block content with the webRequest API is his main concern. "This breaks uBlock Origin and uMatrix, [which] are incompatible with the basic matching algorithm [Google] picked, ostensibly designed to enforce EasyList-like filter lists," he explained in an email to The Register. "A blocking webRequest API allows open-ended content blocker designs, not restricted to a specific design and limits dictated by the same company which states that content blockers are a threat to its business."

He also called out Google's business model on uBlock Origin's GitHub: "The blocking ability of the webRequest API caused Google to yield control of content blocking to content blockers. Now that Google Chrome is the dominant browser, it is in a better position to shift the optimal point between the two goals which benefits Google's primary business. The deprecation of the blocking ability of the webRequest API is to gain back this control, and to further now instrument and report how web pages are filtered since now the exact filters which are applied to web page is information which will be collectable by Google Chrome."

For a number of web users, this was the last straw; many said they'd be moving on from Chrome to other privacy-friendly browsers. One comment reads, "If you use an iOS device, Safari is awesome. The integration between all your hardware devices syncing passwords, tabs, bookmarks, reading list, etc. kicks ass. That's all not to mention its excellent built-in privacy features and that it's really really fast." Another comment reads, "I used to have Firefox. When I heard that even Microsoft was going to use Chromium I realized, Firefox is literally the last front! I installed Firefox and started using it as my main browser." Another says, "Genuinely, most people are choosing between privacy and convenience. And with Firefox you don't need to choose."

Mozilla's Firefox has taken this opportunity to attract Chrome users with a new page detailing how to switch from Chrome to Firefox: "Switching to Firefox is fast, easy and risk-free. Firefox imports your bookmarks, autofill, passwords and preferences from Chrome." The latest Firefox release also comes with a new feature that helps users block fingerprinting by ad trackers.

The Brave browser also tweeted about Chrome's move, stating it will block ads regardless of Chrome's decisions.

https://twitter.com/brave/status/1134182650615173120

Users also appreciated Brave's privacy features.

https://twitter.com/jenzhuscott/status/1134035348240109568

Chrome software security engineer Chris Palmer took to Twitter to say the move was intended to improve the end-user browsing experience, and that paid enterprise users would be exempt from the changes.
https://twitter.com/fugueish/status/1133851275794059265

Chrome security lead Justin Schuh also said the changes were driven by privacy and security concerns.

https://twitter.com/justinschuh/status/1134092257190064128

Top browsers Opera, Brave, and Vivaldi have ignored Chrome's anti-ad-blocker changes, despite having a shared codebase.

https://twitter.com/opera/status/1137717494733508609
https://twitter.com/brave/status/1134182650615173120
https://twitter.com/vivaldibrowser/status/1136204715786719232

Google Chrome developers "clarify" the speculations around Manifest V3 after a study nullifies their performance hit argument
Flutter gets new set of lint rules to build better Chrome OS apps
Chromium developers propose an alternative to webRequest API that could result in existing ad blockers' end


OpenAI builds reinforcement learning-based system giving robots human-like dexterity

Sugandha Lahoti
31 Jul 2018
3 min read
Researchers at OpenAI have developed a system, trained with reinforcement learning algorithms, that is capable of dexterous in-hand manipulation. Termed Dactyl, this system can learn to solve object reorientation tasks entirely in simulation without any human input. After the training phase, the system was able to work on a real robot without any fine-tuning.

Using humanoid hands to manipulate objects has been a long-standing challenge in robotic control, and current techniques remain limited in their ability to manipulate objects in the real world. Although robotic hands have been available for quite some time, they were largely unable to perform dexterous manipulation tasks with such complex end-effectors. The Shadow Dexterous Hand, for instance, has been available since 2005 with five fingers and 24 degrees of freedom. However, it did not see large-scale adoption because of the difficulty of controlling such complex systems.

OpenAI researchers have now trained control policies that allow a robot hand to perform complex in-hand manipulations. The system shows unprecedented levels of dexterity and discovers hand grasp types found in humans, such as the tripod, prismatic, and tip pinch grasps. It is also able to display dynamic behaviors such as finger gaiting, multi-finger coordination, controlled use of gravity, and application of translational and torsional forces to the object.

How does the OpenAI system work?
First, the researchers used a large distribution of simulations with randomized parameters to collect data for the control policy and the vision-based pose estimator. The control policy receives observed robot states and rewards from the distributed simulations; it learns to map observations to actions using an RNN and reinforcement learning. The vision-based pose estimator renders scenes collected from the distributed simulations and learns to predict the pose of the object from images using a CNN, trained separately from the control policy.

At run time, the object pose is predicted from three camera feeds with the CNN, the robot fingertip locations are measured using a 3D motion capture system, and both are given to the control policy, which produces an action for the robot (a schematic sketch of this loop appears at the end of this article).

You can place a block in the palm of the Shadow Dexterous Hand and Dactyl can reposition it into different orientations; for example, it can rotate the block to put a new face on top.

According to OpenAI, this project completes a full cycle of AI development that the organization has been pursuing for the past two years: "We've developed a new learning algorithm, scaled it massively to solve hard simulated tasks, and then applied the resulting system to the real world."

You can read more about Dactyl on the OpenAI blog, or read the research paper for further analysis.

AI beats human again – this time in a team-based strategy game
OpenAI charter puts safety, standards, and transparency first
Introducing Open AI's Reptile: The latest scalable meta-learning Algorithm on the block
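Returning to the control loop described above, here is a schematic Python sketch of how the pieces fit together. This is not OpenAI's code: the array shapes, the zero-valued stand-ins for the trained CNN and RNN, and all names are illustrative assumptions.

```python
import numpy as np

def estimate_pose(camera_images):
    """Stand-in for the CNN pose estimator: maps three camera feeds to an
    object pose. A real system would run a trained convolutional network."""
    assert len(camera_images) == 3
    return np.zeros(7)  # e.g. xyz position plus a quaternion orientation

class RecurrentPolicy:
    """Stand-in for the RNN control policy trained with reinforcement
    learning; it carries hidden state between control steps."""
    def __init__(self, action_dim=20, hidden_dim=128):
        self.hidden = np.zeros(hidden_dim)  # recurrent state across steps
        self.action_dim = action_dim

    def act(self, observation):
        # A trained network would update self.hidden from the observation
        # and emit joint targets; here we return a zero action of the
        # right shape just to show the data flow.
        return np.zeros(self.action_dim)

# One step of the loop: pose from the vision model, fingertip locations
# from the motion capture system, then an action for the robot hand.
policy = RecurrentPolicy()
pose = estimate_pose([np.zeros((64, 64, 3))] * 3)
fingertips = np.zeros(15)  # 5 fingertips x 3 coordinates (assumed layout)
action = policy.act(np.concatenate([pose, fingertips]))
print(action.shape)  # (20,)
```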

Microsoft workers protest the lethal use of Hololens2 in the $480m deal with US military

Sugandha Lahoti
25 Feb 2019
4 min read
Microsoft employees are outraged over the company's $480 million deal with the U.S. Army to provide HoloLens 2, Microsoft's latest augmented reality headset, for use on the battlefield. Although Microsoft won the contract in November, it was last Friday that Microsoft workers took to Twitter to express their concerns. In an open letter addressed to Microsoft CEO Satya Nadella and president and chief legal officer Brad Smith, employees wrote that the deal has "crossed the line" and "is designed to help people kill."

https://twitter.com/MsWorkers4/status/1099066343523930112

This is not the first time tech workers have stood up in solidarity against tech giants over questionable business deals or policies. Last year, 'Employees of Microsoft' asked Microsoft not to bid on the US military's Project JEDI in an open letter. Google employees also protested against the company's censored search engine for China, codenamed Project Dragonfly. And in October 2018, an Amazon employee spoke out against Amazon selling its facial recognition technology, Rekognition, to police departments across the world.

Yesterday, Microsoft unveiled the HoloLens 2 AR device at the Mobile World Congress (MWC) in Barcelona. It has also signed a contract with the US military called the Integrated Visual Augmentation System (IVAS). Per the terms of the deal, the AR headsets will be used to insert holographic images into the wearer's field of vision. The contract's stated objective is to "rapidly develop, test, and manufacture a single platform that Soldiers can use to Fight, Rehearse, and Train that provides increased lethality, mobility, and situational awareness necessary to achieve overmatch against our current and future adversaries."

What are Microsoft employees saying?
The letter, signed by more than 100 Microsoft employees, was published on an internal message board and circulated via email at the company on Friday. It condemned the IVAS contract, demanding its cancellation and calling for stricter ethical guidelines. "We are alarmed that Microsoft is working to provide weapons technology to the US Military, helping one country's government 'increase lethality' using tools we built. We did not sign up to develop weapons, and we demand a say in how our work is used," the letter said. Aligning HoloLens 2 with the military turns "warfare into a simulated 'video game,' further distancing soldiers from the grim stakes of war and the reality of bloodshed," the letter adds.

In October, Brad Smith defended Microsoft's work with the military via a blog post: "First, we believe that the people who defend our country need and deserve our support. And second, to withdraw from this market is to reduce our opportunity to engage in the public debate about how new technologies can best be used in a responsible way. We are not going to withdraw from the future." He also suggested that employees concerned about working on unethical projects "would be allowed to move to other work within the company." This statement ignores "the problem that workers are not properly informed of the use of their work," the letter stated.

Netizens are also in solidarity with Microsoft employees and criticize the military involvement.

https://twitter.com/tracy_karin/status/1099880041721352192
https://twitter.com/Durrtydoesit/status/1099840664978817024
https://twitter.com/cgallagher036/status/1099826879090118657

A comment on Hacker News reads, "Whether you agree with this sentiment or not, people waking up to ethical questions in our field is unquestionably a good thing. It's important to ask these questions."

Rights groups pressure Google, Amazon, and Microsoft to stop selling facial surveillance tech to the government
'Employees of Microsoft' ask Microsoft not to bid on US Military's Project JEDI in an open letter
The new tech worker movement: How did we get here? And what comes next?


Twitter experienced a major outage yesterday due to an internal configuration issue

Fatema Patrawala
12 Jul 2019
4 min read
Yesterday Twitter went down across major parts of the world, including the US and the UK. Twitter users reported being unable to access the platform on web and mobile devices. The outage lasted approximately an hour.

According to DownDetector.com, the site began experiencing major issues at 2:46pm EST, with problems reported by users attempting to access Twitter through its website, its iPhone or iPad app, and Android devices. While the majority of the reported problems were website issues (51%), nearly 30% came from iPhone and iPad app usage and another 18% from Android users, as per the outage report.

Twitter acknowledged on its status page that the platform was experiencing issues shortly after the first outages were reported online. The company listed the status as "investigating" and noted that a service disruption was causing the seemingly global issue. "We are currently investigating issues people are having accessing Twitter," the statement read. "We will keep you updated on what's happening."

This month has seen several high-profile outages among social networks. Facebook and Instagram experienced a day-long outage affecting large parts of the world on July 3rd. LinkedIn went down for several hours on Wednesday. Cloudflare suffered two major outages in the span of two weeks this month: one was due to an internal software glitch, and another was caused when Verizon accidentally rerouted IP packets after it wrongly accepted a network misconfiguration from a small ISP in Pennsylvania, USA. Reddit was experiencing outages on its website and app earlier in the day, but appeared to be back up and running for most users an hour before Twitter went down, according to DownDetector.com. And in March, Facebook and its family of apps experienced a 14-hour outage attributed to a server configuration change.

The Twitter site began operating normally nearly an hour later, at approximately 3:45pm EST. Users joked that they had all been "censored for the last hour" when the site eventually came back up. On the status page, Twitter said that the outage was caused by "an internal configuration change, which we're now fixing." "Some people may be able to access Twitter again and we're working to make sure Twitter is available to everyone as quickly as possible," the company said in a follow-up statement.

https://twitter.com/TwitterSupport/status/1149412158121267200

On Hacker News, users discussed the number of recent outages at major tech companies and why this is happening. One comment reads: "Ok, this is too many high-profile, apparently unrelated outages in the last month to be completely a coincidence. Hypotheses: 1) software complexity is escalating over time, and logically will continue to until something makes it stop. It has now reached the point where even large companies cannot maintain high reliability. 2) internet volume is continually increasing over time, and periodically we hit a point where there are just too many pieces required to make it work (until some change the infrastructure solves that). We had such a point when dialup was no longer enough, and we solved that with fiber. Now we have a chokepoint somewhere else in the system, and it will require a different infrastructure change 3) Russia or China or Iran or somebody is f*(#ing with us, to see what they are able to break if they needed to, if they need to apply leverage to, for example, get sanctions lifted 4) Just a series of unconnected errors at big companies 5) Other possibilities?"

On this comment another user adds: "I work at Facebook. I worked at Twitter. I worked at CloudFlare. The answer is nothing other than #4. #1 has the right premise but the wrong conclusion. Software complexity will continue escalating until it drops by either commoditization or redefining problems. Companies at the scale of FAANG(+T) continually accumulate tech debt in pockets and they eventually become the biggest threats to availability. Not the new shiny things. The sinusoidal pattern of exposure will continue."

Facebook, Instagram and WhatsApp suffered a major outage yesterday; people had trouble uploading and sending media files
Facebook family of apps hits 14 hours outage, longest in its history
How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others


What’s new in Google Cloud Functions serverless platform

Melisha Dsouza
17 Aug 2018
5 min read
The Google Cloud Next conference in San Francisco in July 2018 saw some exciting new developments in serverless technology. The company is giving development teams the ability to build apps without worrying about managing servers. Bringing together the best of both worlds, serverless and containers, Google announced that Cloud Functions is now generally available and ready for production use. Here is a list of the new features developers can watch out for:

#1 Write Cloud Functions using Node 8 or Python 3.7
With support for async/await and a new function signature, you can now write Cloud Functions using Node 8. Dealing with multiple asynchronous operations is now easier: Cloud Functions provide data and context, and you can use the await keyword to await the results of asynchronous operations.

Python 3.7 can also be used to write Cloud Functions. As with Node, you get data and context for background functions, and a request for HTTP functions. Python HTTP functions are based on the popular Flask microframework, which lets you get set up really fast: requests are based on flask.Request, and responses just need to be compatible with flask.make_response. For Python background functions, you get data (a dict) and context (google.cloud.functions.Context); to signal completion, you simply return from your function or raise an exception, and Stackdriver error handling kicks in. And, similarly to Node (package.json), Cloud Functions will automatically install all of your Python dependencies (requirements.txt) and build in the cloud. (A minimal sketch of these function signatures follows after this list.) You can have a look at the code differences between Node 6 and Node 8 behavior, and at a Flask request, on the Google Cloud website.

#2 Cloud Functions is now out for Firebase
Cloud Functions for Firebase is also generally available. It has full support for Node 8, including ECMAScript 2017 and async/await. Additional granular controls include support for runtime configuration options, including region, memory, and timeout, allowing you to refine the behavior of your applications. You can find more details in the Firebase documentation. Flexibility for the application stack is also improved: Firebase events (Analytics, Firestore, Realtime Database, Authentication) are directly available in the Cloud Functions console on GCP, so you can trigger your functions in response to Firebase events directly from your GCP project.

#3 Run headless Chrome by accessing system libraries
Google Cloud Functions has also broadened the scope of available libraries by rebasing the underlying Cloud Functions operating system onto Ubuntu 18.04 LTS. Access to system libraries such as ffmpeg and libcairo2 is now available, in addition to imagemagick, as well as everything required to run headless Chrome. For example, you can now process videos and take web page screenshots in Chrome from within Cloud Functions.

#4 Set environment variables
You can now pass configuration to your functions by specifying key-value pairs that are bound to a function, without those pairs having to exist in your source code. Environment variables are set at deploy time using the --set-env-vars argument and are injected into the environment at execution time. You can find more details on the Google Cloud webpage.

#5 Cloud SQL direct connect
You can now connect Cloud Functions to Cloud SQL instances through a fully managed, secure direct connection. Explore more in the official documentation.
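As a concrete illustration of the Python 3.7 signatures described in #1 and the environment variables in #4, here is a minimal sketch of an HTTP function and a background function. The function names, the `name` query parameter, and the `GREETING` variable are illustrative assumptions, not taken from Google's announcement:

```python
import os
from flask import escape  # Flask is provided by the Python 3.7 runtime

def hello_http(request):
    """HTTP function: `request` is a flask.Request, and the return value
    just needs to be compatible with flask.make_response."""
    greeting = os.environ.get('GREETING', 'Hello')  # set via --set-env-vars
    name = request.args.get('name', 'World')
    return '{}, {}!'.format(greeting, escape(name))

def on_event(data, context):
    """Background function: `data` is a dict event payload and `context`
    is a google.cloud.functions.Context with event metadata."""
    print('Handling event {} of type {}'.format(
        context.event_id, context.event_type))
    # Returning normally signals completion; raising an exception hands
    # the error to Stackdriver error handling, as described above.
```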
What to expect next in Google Cloud Functions?
Apart from these, Google also promises a range of features to be released in the future, including:

1. Scaling controls
These will limit the number of instances on a per-function basis, thus limiting traffic. Sudden traffic surges, in which Cloud Functions rapidly scales up and overloads a database, will therefore come under control, as will general prioritization based on the importance of various parts of your system.

2. Serverless scheduling
You'll be able to schedule Cloud Functions down to one-minute intervals, invoked via HTTP(S) or Pub/Sub. This allows you to execute Cloud Functions on a repeating schedule; tasks like daily report generation or regularly processing dead letter queues will now pick up speed.

3. Compute Engine VM access
You will be able to connect to Compute Engine VMs running on a private network using the --connected-vpc option. This provides a direct connection to compute resources on an internal IP address range.

4. IAM security controls
The new Cloud Functions Invoker IAM role allows you to add IAM security to a function's URL. You can control who can invoke the function using the same security controls as used elsewhere in Cloud Platform.

5. Serverless containers
With serverless containers, Google provides the same infrastructure that powers Cloud Functions, but users will be able to simply provide a Docker image as input. This will allow them to deploy arbitrary runtimes and arbitrary system libraries on arbitrary Linux distributions, while still retaining the same serverless characteristics as Cloud Functions.

You can find detailed information about the updated services on Google Cloud's official page.

Google Cloud Next: Fei-Fei Li reveals new AI tools for developers
Google Cloud Launches Blockchain Toolkit to help developers build apps easily
Zeit releases Serverless Docker in beta


LLVM WebAssembly backend will soon become Emscripten’s default backend, V8 announces

Bhagyashree R
02 Jul 2019
3 min read
Yesterday, the team behind V8, an open source JavaScript engine, shared the work that they and the community have been doing to make LLVM's WebAssembly backend the default backend for Emscripten. LLVM is a compiler framework and Emscripten is an LLVM-to-Web compiler.

https://twitter.com/v8js/status/1145704863377981445

The LLVM WebAssembly backend will be the third backend in Emscripten. The original compiler was written in JavaScript and parsed LLVM IR in text form. In 2013, a new backend called Fastcomp was created by forking LLVM; it was designed to emit asm.js and was a big improvement in code quality and compile times. According to the announcement, the LLVM WebAssembly backend beats the old Fastcomp backend on most metrics. Here are the advantages the new backend will come with:

Much faster linking
The LLVM WebAssembly backend allows incremental compilation using WebAssembly object files. Fastcomp uses LLVM Intermediate Representation (IR) in bitcode files, which means that at link time the IR still has to be compiled by LLVM; this is why it shows slower link times. WebAssembly object files (.o), on the other hand, already contain compiled WebAssembly code, which accounts for much faster linking.

Faster and smaller code
The new backend shows significant code size reduction compared to Fastcomp. "We see similar things on real-world codebases that are not in the test suite, for example, BananaBread, a port of the Cube 2 game engine to the Web, shrinks by over 6%, and Doom 3 shrinks by 15%!," shared the team in the announcement. The faster and smaller code comes from LLVM's better IR optimizations and smarter backend codegen, which can do things like global value numbering (GVN). In addition, the team has put effort into tuning the Binaryen optimizer, which also helps make the code smaller and faster compared to Fastcomp.

Support for all LLVM IR
While Fastcomp could handle the LLVM IR generated by clang, it often failed on other sources. The LLVM WebAssembly backend, by contrast, can handle any IR, as it uses the common LLVM backend infrastructure.

New WebAssembly features
Fastcomp generates asm.js before running asm2wasm, which makes it difficult to handle new WebAssembly features like tail calls, exceptions, and SIMD. "The WebAssembly backend is the natural place to work on those, and we are in fact working on all of the features just mentioned!," the team added.

To test the WebAssembly backend, you just have to run the following commands:

emsdk install latest-upstream
emsdk activate latest-upstream

Read more in detail on V8's official website.

V8 7.2 Beta releases with support for public class fields, well-formed JSON.stringify, and more
V8 7.5 Beta is now out with WebAssembly implicit caching, bulk memory operations, and more
Google's V8 7.2 and Chrome 72 gets public class fields syntax; private class fields to come soon

ABI stability may finally come in Swift 5.0

Melisha Dsouza
19 Sep 2018
3 min read
Version 5 of Apple's Swift language, used for iOS and macOS application development, will be released early next year. The main focus of this release will be ABI (application binary interface) stability in the standard Swift library, in addition to standard library improvements, Foundation improvements, and syntactic additions.

ABI (application binary interface) features in Swift 5.0
The ABI defines how to call a function, how data is represented in memory, and where metadata lives and how to access it. The current version of Swift is not ABI stable, so every binary (app) bundles its own version of the Swift dynamic library. For instance, if App1 is using Swift 3.0, it bundles the Swift 3.0 dynamic library (containing the 3.0 ABI), and if App2 is using Swift 3.2, it bundles Swift 3.2 and its 3.2 ABI. In this model, Swift doesn't live in the iOS operating system; it lives within each app.

ABI stability in Swift 5.0 will enable future compiler versions to produce binaries that conform to the stable ABI. A stable ABI tends to persist for the rest of the platform's lifetime due to ever-increasing mutual dependencies. Once Swift becomes ABI stable, its ABI will be compatible with every subsequent version of Swift. For example, if App1 is using Swift 5.0 but App2 is using Swift 5.3, both will consume the Swift ABI embedded in the operating system.

The ABI feature was originally intended for the Swift 4 release. Carryover goals from Swift 4 that are required for implementing the ABI in Swift 5 include:

Generics features for the standard library, including conditional conformances for generic types, recursive protocol types, and lifting restrictions on associated types in protocols
API resilience, which will allow a library's public APIs to evolve
A memory ownership model

Besides ABI stability, expect these improvements in Swift 5:

#1 String ergonomics
Processing of the string type is expected to get better: users will be able to create raw strings, distinguish between enums that are fixed and enums that might change in the future, and check whether one number is a multiple of another using isMultiple(of:).

#2 Groundwork for a new concurrency model
Swift 5 will focus on designing language capabilities for building and using asynchronous APIs and dealing with the problems created by callback-heavy code.

#3 Targeted improvements to the Foundation API
The Cocoa SDK, originally designed for Objective-C, can work seamlessly with Swift.

To try out Swift 5.0 ahead of its release early next year, download the latest Swift trunk development snapshot, activate it inside your current Xcode version, and then head over to an Xcode playground for examples you can edit. Read in depth about the new features to be implemented in Swift 5.0 at HackingWithSwift.

Swift 4.2 releases with language, library and package manager updates!
Apple bans Facebook's VPN app from the App Store for violating its data collection rules
iPhone XS and Apple Watch details leaked hours before product launch


Amazon buys ‘Eero’ mesh router startup, adding fuel to its in-house Alexa smart home ecosystem ambitions

Melisha Dsouza
12 Feb 2019
2 min read
Amazon has announced its plans to acquire Eero, a startup focused on mesh home routers. Eero uses a mesh network to produce wireless routers and extenders that provide better coverage for home Wi-Fi networks, making it easy to have fast and reliable Wi-Fi all over the house. Eero routers are designed to overcome the coverage and dead zone issues encountered with traditional routers: multiple access points provide an entire home or apartment with a strong Wi-Fi signal.

Amazon says that this deal will "help customers better connect smart home devices." It will make it easier to set up Alexa-compatible gadgets if Amazon also controls the router technology. Amazon SVP Dave Limp said in a press release, "We are incredibly impressed with the Eero team and how quickly they invented a WiFi solution that makes connected devices just work. We have a shared vision that the smart home experience can get even easier, and we're committed to continuing innovating on behalf of customers."

While the deal is good news for Amazon investors, many Eero users have expressed their disapproval. Amazon has faced criticism over how Alexa listens in people's homes and can be a threat to user privacy, and existing Eero users have voiced concerns along the same lines:

https://twitter.com/steveriggins/status/1095081742736605184
https://twitter.com/TimSchmitz/status/1095103321407397888
https://twitter.com/DerekWallace/status/1095088112554921984

Eero support has tried to put customers' worries to rest with a tweet saying, "Eero does not track customers' internet activity and this policy will not change with the acquisition."

Eero is not the first smart home startup to be acquired by Amazon: the company has bought startups like Ring and Blink in recent years, with a vision to build its own in-house Alexa smart home ecosystem. Details of the deal have yet to be disclosed. Head over to TechCrunch for more insights on this news.

"Amazon wants to make all the rules and weaken democracy in NYC": Brad Lander on Amazon's HQ2 deal
Aurora, a self-driving startup, secures $530 million in funding from Amazon, Sequoia, and T. Rowe Price among others
Amazon faces increasing public pressure as HQ2 plans go under the scanner in New York


Kubernetes 1.10 released

Vijin Boricha
09 Apr 2018
2 min read
Kubernetes has announced its first release of 2018: Kubernetes 1.10. This release focuses on stabilizing three key areas: storage, security, and networking. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications; it was initially designed by Google and is now maintained by the Cloud Native Computing Foundation.

Storage: CSI and local storage move to beta
The Container Storage Interface (CSI) moves to beta. New volume plugins can now be installed the same way a pod is deployed, which lets third-party storage providers develop solutions independently, outside the core Kubernetes codebase. Local storage management has also progressed to beta, making locally attached storage available as a persistent volume source. This promises lower cost and higher performance for distributed file systems and databases.

Security: external credential providers (alpha)
Complementing the Cloud Controller Manager feature added in 1.9, Kubernetes 1.10 adds external credential providers. This enables cloud providers and other platform developers to release binary plugins that handle authentication for specific cloud-provider Identity and Access Management services.

Networking: CoreDNS as a DNS provider (beta)
Kubernetes now provides the ability to switch the DNS service to CoreDNS during installation. CoreDNS is a single process that supports more use cases.

For a complete list of the features in this release, see the changelog.

Check out other related posts:
The key differences between Kubernetes and Docker Swarm
Apache Spark 2.3 now has native Kubernetes support!
OpenShift 3.9 released ahead of planned schedule

OpenAI introduces Neural MMO, a multiagent game environment for reinforcement learning agents

Amrata Joshi
06 Mar 2019
3 min read
On Monday, the team at OpenAI launched Neural MMO (Massively Multiplayer Online), a multiagent game environment for reinforcement learning agents. It will be used for training AI in complex, open-world environments. The platform supports a large number of agents within a persistent and open-ended task.

The need for Neural MMO
The suitability of MMOs for modeling real-life events has been explored for the past few years, but two main challenges remain for multiagent reinforcement learning. First, there is a need to create open-ended tasks with a high complexity ceiling, as current environments tend to be complex but narrow. Second, the OpenAI team points to the need for more benchmark environments in order to quantify learning progress in the presence of large population scales.

Criteria for overcoming these challenges
The team suggests criteria that an environment needs to meet to overcome these challenges:

Persistence: Agents can learn concurrently in the presence of other learning agents without environment resets. Strategies must adapt to rapid changes in the behaviors of other agents and also consider long time horizons.

Scale: Neural MMO supports a large and variable number of entities. The experiments by the OpenAI team consider up to 100M lifetimes of 128 concurrent agents in each of 100 concurrent servers.

Efficiency: The computational barrier to entry is low, so effective policies can be trained on a single desktop CPU.

Expansion: Neural MMO is designed to be updated with new content. The core features include a food and water foraging system, procedural generation of tile-based terrain, and a strategic combat system. There are opportunities for open-source-driven expansion in the future.

The environment
Players can join any available server, each containing an automatically generated tile-based game map of configurable size. Some tiles are traversable, such as food-bearing forest tiles and grass tiles, while others, such as water and solid stone, are not. To sustain their health, players must obtain food and water and avoid combat damage from other agents. The platform comes with a procedural environment generator and visualization tools for map tile visitation distribution, value functions, and agent-agent dependencies of learned policies.

The team trained a fully connected architecture using vanilla policy gradients, with a value function baseline and reward discounting as the only enhancements. They converted variable-length observations, such as the list of surrounding players, into a fixed-length vector by computing the maximum across all players (a small sketch of this idea follows below).

Neural MMO resolves a couple of the limitations of previous game-based environments, but many are still left unsolved. A few users are excited about this news. One user commented on Hacker News, "What I find interesting about this is that the agents naturally become pacifists." A few others think that the company should come up with novel results rather than replicate known ones. Another user commented on Hacker News, "So far, they are replicating known results from evolutionary game theory (pacifism & niches) to economics (distance & diversification). I wonder when and if they will surprise some novel results."

To know more about this news, check out OpenAI's official blog post.
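Here is a small illustrative Python sketch of that observation trick; the feature dimension and function name are assumptions for the example, not taken from the Neural MMO codebase:

```python
import numpy as np

def pool_player_observations(player_features, feature_dim=8):
    """Reduce a variable-length list of per-player feature vectors to one
    fixed-length vector by taking the elementwise maximum across players,
    so the policy network always sees a constant-sized input."""
    if not player_features:
        return np.zeros(feature_dim)  # nothing visible: a neutral vector
    return np.max(np.stack(player_features), axis=0)

# Two visible players and five visible players yield the same-sized input.
few = pool_player_observations([np.random.rand(8) for _ in range(2)])
many = pool_player_observations([np.random.rand(8) for _ in range(5)])
assert few.shape == many.shape == (8,)
```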
AI Village shares its perspective on OpenAI's decision to release a limited version of GPT-2
OpenAI team publishes a paper arguing that long term AI safety research needs social scientists
OpenAI's new versatile AI model, GPT-2 can efficiently write convincing fake news from just a few words


HTC, Intel, Lenovo showcase their products at Day 2 of CES 2019

Sugandha Lahoti
08 Jan 2019
4 min read
CES 2019 kicks off in Las Vegas, Nevada today, January 8, and runs for three days. The conference unofficially began on Sunday, January 6, and you can have a look at the announcements made that day. Yesterday was the main press day, when the majority of announcements were made, with a lot of companies showcasing their latest projects and announcing new products, software, and services.

HTC
HTC announced a partnership with Mozilla, bringing Firefox's virtual reality web browser to the Vive headset. Mozilla first announced Firefox Reality as a dedicated VR web browser in April. In September, they announced that the browser was available on Viveport, Oculus, and Daydream; now it is available for the HTC Vive headset. As part of the deal, HTC is also teaming up with Amazon to make use of Amazon Sumerian. HTC also announced the Vive Pro Eye virtual reality headset with native, built-in eye tracking. It uses "foveated rendering" to render sharp images wherever the human eye is looking in a virtual scene while reducing the image quality of objects on the periphery.

Intel
Intel made a number of announcements at CES 2019. The company showcased new processors and also released a press release with updates on Project Athena, through which it is getting PC makers ready for "a new class of advanced laptops": Ultrabooks part two, with 5G and artificial intelligence support.

New Intel processors:
New 9th Gen Core processors for a limited number of desktops and laptops
A 10nm Ice Lake processor for thin laptops
A 10nm Lakefield processor using 3D stacking technology for very small computers and tablets
A 10nm Cascade Lake Xeon processor for data processing
3D Athlete Tracking technology, which runs on the Cascade Lake chip and shows data about how fast and far athletes are traveling
Intel's 10nm Snow Ridge SoC for 5G base stations

Lenovo
Lenovo has made minor updates to its ThinkPad X1 Carbon and X1 Yoga laptops with new designs for 2019. They are mostly getting a material change and are also going to be thinner and lighter this year. Lenovo has also released two large-display monitors: the ThinkVision P44W, aimed at business users, and the Legion Y44w gaming monitor. Both have a 43.4-inch panel.

Uber
One of Uber's partners in the air taxi domain, Bell, revealed the design of its vertical takeoff and landing air taxi at CES 2019. The flying taxi, dubbed the Bell Nexus, can accommodate up to five people and is a hybrid-electric powered vehicle. CES 2019 also saw the release of the game Marvel's Avengers: Rocket's Rescue Run, the first demo product from startup Holoride, which counts Audi as one of its stakeholders. It is the result of Audi and Disney's new media format, which aims to bring virtual reality to passengers in cars, specifically in Ubers.

More announcements:
Harley-Davidson gave a preview of its first all-electric motorcycle, which will launch in August 2019 and cost $29,799
TCL announced its first soundbars and a 75-inch version of the excellent 6-Series 4K Roku TV
Elgato announced a professional $199 light rig for Twitch streamers and YouTube creators
Hisense announced its new 2019 4K TV lineup and the Sonic One TV
Griffin introduced new wireless chargers for the iPhone and Apple Watch
Amazon is planning to let people deliver packages inside your garage
Kodak released a new instant camera and printer line
GE announced a 27-inch smart display for the kitchen that streams Netflix
Google Assistant will soon be on a billion devices, and its next stop is feature phones
Vizio announced its most advanced 4K TV ever and support for Apple's AirPlay 2
Toyota shared details of its Guardian driver-assist system, which mimics a technique used in fighter jets to serve as a smart intermediary between driver and car

CES 2019: Top announcements made so far
HTC Vive Focus 2.0 update promises long battery life, among other things for the VR headset
Intel unveils the first 3D Logic Chip packaging technology, 'Foveros', powering its new 10nm chips, 'Sunny Cove'