
Tech News


Amazon workers protest on Prime Day, demand a safe work environment and fair wages

Fatema Patrawala
16 Jul 2019
6 min read
While people all over the globe splurge at Amazon's much-awaited Prime Day sale, employees of the e-commerce giant are protesting at multiple sites across the globe, demanding better working conditions among other things. Workers at the Amazon warehouse in Shakopee, Minnesota, staged a six-hour work stoppage on Prime Day. According to BBC News, 2,000 Amazon workers in Germany went on strike yesterday, and week-long protests are planned in the UK.

Amazon launched the sale five years ago for its Prime customers, who pay a subscription fee in exchange for deep discounts on a range of products, free shipping, next-day delivery and other perks.

Bloomberg reports that William Stolz, one of the Minnesota employees organizing the strike, said: "Amazon is going to be telling one story about itself, which is they can ship a Kindle to your house in one day, isn't that wonderful! But we want to take the opportunity to talk about what it takes to make that work happen and put pressure on Amazon to protect us and provide safe, reliable jobs." He says he has to pick an item about every eight seconds, or 332 per hour, over a 10-hour day. "The speeds that we have to work are very physically and mentally exhausting, in some cases leading to injuries," he said. "Basically we just want them to treat us with respect as human beings and not treat us like machines."

Bloomberg also reported that the Minnesota warehouse has become central to Amazon worker activism. There were talks between employees and management about reducing workloads during Ramadan and designating a conference room as a prayer space, but workers say Amazon has failed to meet these demands and terminates employees who do not meet its productivity quotas. In a letter to the National Labor Relations Board last year, reported by The Verge, an attorney for Amazon said that hundreds of employees at one Baltimore facility were terminated within about a year for failing to meet productivity rates.

In May, the Washington Post published a detailed report on how Amazon has gamified its warehouses and made productivity goals dynamic. Gamification generally refers to software that simulates video games by offering rewards, badges or bragging rights; Amazon warehouse workers complete various tasks to earn these reward points.

While the protest was planned by Amazon warehouse workers, a few white-collar Amazon engineers flew to Minnesota to join it in a show of solidarity. They are demanding that the company take action against climate change, ease quotas, and make more temp employees permanent.

https://twitter.com/histoftech/status/1148348541678604288

"We're both fighting for a livable future," said a Seattle software engineer. It is the latest example of tech employees with very different jobs trying to forge common cause in the hope that their bosses will find their demands harder to ignore.

In May, Amazon shareholders rejected 11 resolutions put forward by employees, covering Amazon's controversial facial recognition technology, demands for more action on climate change, salary transparency, and other equity issues.

Tyler Hamilton, who works at the Shakopee warehouse, said he hoped that consumers would remember that there are people behind the packages that show up at their doors, often less than 48 hours after placing an order.

"We are the faces behind the boxes," Hamilton said. "The little smiley face that comes with every package, not everyone in there smiles all the time. It can be rough sometimes. And, you should think about that when you order it."

In Germany, Amazon employs 20,000 people. The labour union Verdi said more than 2,000 workers at seven sites had gone on strike under the slogan "no more discount on our incomes". "While Amazon fuels bargain hunting on Prime Day with hefty discounts, employees are being deprived of a living wage," said Orhan Akman, retail specialist at Verdi.

In the UK, GMB union officials handed leaflets to workers arriving at the site in Peterborough in the East Midlands, and in the coming days protests are expected at other sites such as Swansea and Rugeley in the West Midlands. Mick Rix, GMB national officer, said: "Amazon workers want Jeff Bezos to know they are people, not robots. It's prime time for Amazon to get round the table with GMB and discuss ways to make workplaces safer and to give their workers an independent voice."

https://twitter.com/GMB_union/status/1148658030487232515

In response, Amazon said that it "provided great employment opportunities with excellent pay" and encouraged people to compare its operations in Shakopee with other employers in the area. In the UK, where it employs 29,500 people, a spokesperson said the company offered industry-leading pay starting at £9.50 per hour and was the "employer of choice for thousands of people across the UK". It said its German operations offered wages "at the upper end of what is paid in comparable jobs" and that it was "seeing very limited participation across Germany with zero operational impact and therefore no impact on customer deliveries".

The planned strike has caught the attention of politicians. Democratic presidential candidates Elizabeth Warren and Bernie Sanders both offered public support on social media. "I fully support Amazon workers' Prime Day strike," Warren said in a tweet. "Their fight for safe and reliable jobs is another reminder that we must come together to hold big corporations accountable."

https://twitter.com/ewarren/status/1150760629583712257

"I stand in solidarity with the courageous Amazon workers engaging in a work stoppage against unconscionable working conditions in their warehouses," Sanders tweeted. "It is not too much to ask that a company owned by the wealthiest person in the world treat its workers with dignity and respect."

Read more:
Amazon S3 is retiring support for path-style API requests; sparks censorship fears
Amazon employees get support from Glass Lewis and ISS on its resolution for Amazon to manage climate change risks
Amazon rejects all 11 shareholder proposals including the employee-led climate resolution at Annual shareholder meeting


Microsoft announces official support for Windows 10 to build 64-bit ARM apps

Prasad Ramesh
19 Nov 2018
2 min read
Last week Microsoft announced that developers using Visual Studio now have access to an officially supported SDK and tools for creating 64-bit ARM (ARM64) apps. The Microsoft Store is now also accepting submissions for apps built for the ARM64 architecture.

Lenovo and Samsung are coming out with new Windows 10 ARM devices featuring the Qualcomm Snapdragon 850 chip; an x86 emulation layer lets these devices run Windows applications. Developers can use Visual Studio 15.9 to recompile both UWP and C++ Win32 apps to run natively on ARM devices running Windows 10. Running natively allows applications to take complete advantage of the processing power and capabilities of the device, resulting in the best possible experience for users.

Instructions to enable Windows 10 64-bit ARM app support:

- Update Visual Studio to version 15.9.
- Ensure that the individual component "Visual C++ compilers and libraries for ARM64" is installed if you plan to build ARM64 C++ Win32 apps.
- After updating, ARM64 will appear as an available build configuration for new UWP projects. For existing projects and C++ Win32 projects, an ARM64 configuration needs to be added to the project: via Configuration properties in Configuration Manager, add a new Active solution platform, name it ARM64, copy the settings from ARM or x64, and check the box to create new project platforms.
- Hitting Build should produce the ARM64 binaries.

You can use remote debugging to debug your app; this is fully supported on ARM64. Alternatively, you can create a package for sideloading or directly copy binaries to run the app.

The Microsoft Store is now accepting ARM64 UWP apps, both C++ and .NET Native. You can also use the Desktop Bridge to wrap ARM64 binaries into a package to submit to the Store, host dedicated ARM64 versions of Win32 apps on your own website, or integrate ARM64 into existing multi-architecture installers. For more instructions, visit the Windows Blog.

Read more:
Another bug in Windows 10 October update that can cause data loss
Microsoft announces .NET standard 2.1
Microsoft brings an open-source model of Component Firmware Update (CFU) for peripheral developers


Uber releases Ludwig, an open source AI toolkit that simplifies training deep learning models for non-experts

Natasha Mathur
12 Feb 2019
3 min read
Uber released a new open source deep learning toolbox called Ludwig yesterday, to make training and testing deep learning models easier for non-experts. "By using Ludwig, experts and researchers can simplify the prototyping process and streamline data processing so that they can focus on developing deep learning architectures rather than data wrangling", states the Uber team.

Uber has been working on Ludwig for the past two years to simplify the use of deep learning models in projects, and has used the toolkit for several of its own projects, such as its Customer Obsession Ticket Assistant (COTA), information extraction from driver licenses, and food delivery time prediction. Ludwig comes with a set of model architectures that can be combined to develop an end-to-end model for a given use case.

Main highlights of Ludwig:

- No need to write code: you don't need any coding skills to train a model and use it for obtaining predictions.
- Generality: Ludwig uses a new data-type-based approach to deep learning model design, making the tool usable across a wide variety of use cases.
- Flexibility: Ludwig offers extensive control over model building and training while remaining very user-friendly, especially for beginners.
- Extensibility: it is easy to add new model architectures and new feature data types.
- Understandability: standard visualizations help users understand the performance of their deep learning models and compare their predictions.

Apart from being flexible and accessible, Ludwig comes with additional benefits for non-programmers, including a set of command line utilities for training, testing models, and obtaining predictions. It also offers a programmatic API, allowing users to train and use a model with only a few lines of code (see the sketch at the end of this piece). Moreover, Ludwig includes tools that help with evaluating models, comparing their performance and predictions via visualizations, and extracting model weights and activations from them.

To train a deep learning model, users provide Ludwig with a tabular file (such as a CSV) containing the data and a YAML (YAML Ain't Markup Language) configuration file that specifies which columns of the tabular file are input features and which are output target variables. The simplicity of this configuration file enables faster prototyping, potentially bringing hours of coding down to just a few minutes.

Users can also visualize their training results. Ludwig creates a results directory containing the trained model with its hyperparameters and summary statistics of the training process, which can be explored through several options in the visualization tool.

"We decided to open source Ludwig because we believe that it can be a useful tool for non-expert machine learning practitioners and experienced deep learning developers and researchers alike", states the Uber team. For more information, check out the official Ludwig blog post.
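As a rough illustration of the programmatic API mentioned above, here is a minimal sketch that trains and queries a model from Python. The column names, file paths, and model definition are hypothetical placeholders, and the import path and exact API surface may differ between Ludwig releases.

```python
from ludwig.api import LudwigModel

# Hypothetical model definition: one text input column and one
# categorical output column, mirroring what the YAML file would declare.
model_definition = {
    'input_features': [{'name': 'article_text', 'type': 'text'}],
    'output_features': [{'name': 'topic', 'type': 'category'}],
}

model = LudwigModel(model_definition)

# train() reads the tabular data file; 'articles.csv' is a placeholder path.
train_stats = model.train(data_csv='articles.csv')

# Obtain predictions for new, unlabeled rows.
predictions = model.predict(data_csv='new_articles.csv')
print(predictions.head())

model.close()
```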
Read more:
Uber releases AresDB, a new GPU-powered real-time Analytics Engine
Uber to restart its autonomous vehicle testing, nine months after the fatal Arizona accident
Uber manager warned the leadership team of the inadequacy of safety procedures in their prototype robo-taxis early March, reports The Information


Facebook will ban white nationalism and separatism content in addition to white supremacy content

Fatema Patrawala
28 Mar 2019
6 min read
Yesterday Facebook rolled out a policy to ban white nationalist content from its platforms. This is a significant step towards meeting the longstanding demands of civil rights groups, who said the tech giant was failing to confront the powerful reach of white extremism on social media. The threat posed by social media enabling white nationalism was violently underlined this month when a racist gunman killed 50 people at two mosques in New Zealand, using Facebook and other social media platforms to post live video of the attack. Facebook removed the video and the gunman's account soon after, but the footage had already been widely shared on Facebook, YouTube, Twitter, Reddit and 8chan.

In a blog post titled "Standing Against Hate," published on Wednesday, the company said the ban takes effect next week. As of midday Wednesday, the feature did not yet appear to be live, based on searches for terms like "white nationalist," "white nationalist groups," and "blood and soil."

As part of the policy change, Facebook said it would divert users who search for white supremacist content to Life After Hate, a nonprofit that helps people leave hate groups, and would improve its ability to use artificial intelligence and machine learning to combat white nationalism. Based on Motherboard's report, the platform will use content-matching to delete images previously flagged as hate speech. There was no further elaboration on how that would work, including whether URLs to websites like 4chan and 8chan would be affected by the ban.

Facebook will not differentiate between white nationalism, white separatism and white supremacy

The company had previously banned white supremacist content from its platforms but maintained a murky distinction between white supremacy, white nationalism and white separatism. On Wednesday, it said that its views had been changed by civil society groups and experts in race relations, and that it now believed "white nationalism and separatism cannot be meaningfully separated from white supremacy and organized hate groups."

Kristen Clarke, the president of the Lawyers' Committee for Civil Rights Under Law, which helped Facebook shape its new attitude toward white nationalism, said the earlier policy "left a gaping hole in terms of what it provided for white supremacists to fully pursue their platform." "Online hate must be confronted if we are going to make meaningful progress in the fight against hate, so this is a really significant victory," Ms. Clarke said.

"It's clear that these concepts are deeply linked to organized hate groups and have no place on our services," Facebook said in a statement posted online. It later added, "Going forward, while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and separatism." "Our policies have long prohibited hateful treatment of people based on characteristics such as race, ethnicity or religion — and that has always included white supremacy," the company said in a statement. "We didn't originally apply the same rationale to expressions of white nationalism and separatism because we were thinking about broader concepts of nationalism and separatism — things like American pride and Basque separatism, which are an important part of people's identity."

Civil rights groups welcome the ban but await implementation before approving Facebook's move

Facebook's decision was praised by civil rights groups and experts in the study of extremism, many of whom had strongly disapproved of the company's previous understanding of white nationalism. Madihha Ahussain, a lawyer for Muslim Advocates, a civil rights group, said the policy change was "a welcome development" in the wake of the New Zealand mosque shootings. But she said the company still had to explain how it will enforce the policy, including how it will determine what constitutes white nationalist content. "We need to know how Facebook will define white nationalist and white separatist content," she said. "For example, will it include expressions of anti-Muslim, anti-Black, anti-Jewish, anti-immigrant and anti-LGBTQ sentiment — all underlying foundations of white nationalism? Further, if the policy lacks robust, informed and assertive enforcement, it will continue to leave vulnerable communities at the mercy of hate groups."

Mark Pitcavage, who tracks domestic extremism for the Anti-Defamation League, said the shift from Facebook was "a good thing if they were using such a narrow definition before." Mr. Pitcavage said the term white nationalism "had always been used as a euphemism for white supremacy, and today it is still used as a euphemism for white supremacy." He called the two terms "identically extreme." He said white supremacists began using the term "white nationalist" after the civil rights movement of the 1960s, when the term "white supremacy" began to receive sustained scorn from mainstream society, including among white people. "The less hard-core white supremacists stopped using any term for themselves, but the more hard-core white supremacists started using 'white nationalism' as a euphemism for 'white supremacy,'" he said. He also said comparisons between white nationalism and American patriotism or ethnic pride were misplaced. "Whiteness is not an ethnicity, it is a skin color," Mr. Pitcavage said. "And America is a multicultural society. White nationalism is simply a form of white supremacy. It is an ideology centered on hate."

The progressive nonprofit civil rights advocacy group Color Of Change called Facebook's new moderation policy a critical step forward. "Color Of Change alerted Facebook years ago to the growing dangers of white nationalists on its platform, and today, we are glad to see the company's leadership take this critical step forward in updating its policy on white nationalism," the statement reads. "We look forward to continuing our work with Facebook to ensure that the platform's content moderation guidelines and training properly support the updated policy and are informed by civil rights and racial justice organizations."

Read more:
How social media enabled and amplified the Christchurch terrorist attack
Google and Facebook working hard to clean image after the media backlash from the Christchurch terrorist attack
Facebook under criminal investigations for data sharing deals: NYT report


Keras 2.2.0 releases!

Sunith Shetty
08 Jun 2018
3 min read
The Keras team has announced version 2.2.0, with notable features that make deep learning easier for developers. This release brings new API changes, new input modes, bug fixes, and performance improvements to the high-level neural network API.

Keras is a popular neural network API capable of running on top of TensorFlow, CNTK or Theano. This Python API is developed with a focus on fast experimentation, minimizing the delay between idea and result while doing research. It is a highly efficient library allowing easy and fast prototyping, and it runs seamlessly on both CPU and GPU.

Some of the noteworthy changes in Keras 2.2.0:

New areas of improvement

- A new API called model subclassing has been added for model definition (see the sketch at the end of this piece).
- A new input mode provides the ability to call models on TensorFlow tensors directly (applicable to the TensorFlow backend only).
- Improved feature coverage of Keras with the CNTK and Theano backends.
- Lots of bug fixes and performance improvements to the Keras API.
- The Keras engine now follows a much more modular structure, improving code structure and code health and reducing test time.
- The Keras modules applications and preprocessing are now externalized to their own repositories, keras-applications and keras-preprocessing respectively.

New API changes

- The MobileNetV2 application has been added, available for all backends.
- CNTK and Theano support has been enabled for the applications Xception and MobileNet, for the layers SeparableConv1D and SeparableConv2D, and for the backend methods separable_conv1d and separable_conv2d, all of which were previously only available for TensorFlow.
- You can now feed symbolic tensors to models with the TensorFlow backend.
- Support for input masking in the TimeDistributed layer.
- The ReLU activation is easier to configure while retaining easy serialization capabilities, via the new advanced_activations layer ReLU.

For a complete list of new API changes, visit GitHub.

Breaking changes

- The legacy Merge layers and their related functionality, remnants of Keras 0, have been removed. These layers were deprecated in May 2016, with removal scheduled for August 2017. Models from the Keras 0 API using these layers cannot be loaded with Keras 2.2.0 and above.
- The base initializer truncated_normal now returns values that are scaled by ~0.9, providing the correct variance after truncation.

For the full list of updates, refer to the release notes.
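Below is a minimal sketch of the model subclassing pattern described in the release highlights. The layer sizes and the toy training data are invented for illustration; details of the API may vary slightly across Keras versions.

```python
import numpy as np
import keras

class SimpleMLP(keras.Model):
    """A small multilayer perceptron defined via model subclassing."""

    def __init__(self, num_classes=10):
        super(SimpleMLP, self).__init__(name='simple_mlp')
        self.dense1 = keras.layers.Dense(32, activation='relu')
        self.dense2 = keras.layers.Dense(num_classes, activation='softmax')

    def call(self, inputs):
        # The forward pass is written imperatively.
        x = self.dense1(inputs)
        return self.dense2(x)

model = SimpleMLP()
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

# Toy data, just to show the training call.
x = np.random.random((100, 20))
y = keras.utils.to_categorical(np.random.randint(10, size=(100, 1)), 10)
model.fit(x, y, epochs=1)
```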
Read more:
Why you should use Keras for deep learning
Implementing Deep Learning with Keras
2 ways to customize your deep learning models with Keras
How to build Deep convolutional GAN using TensorFlow and Keras


Paper in two minutes: Zero-Shot Visual Imitation

Savia Lobo
02 May 2018
4 min read
The ICLR paper 'Zero-Shot Visual Imitation' is a collaborative effort by Deepak Pathak, Parsa Mahmoudieh, Michael Luo, Pulkit Agrawal, Dian Chen, Fred Shentu, Evan Shelhamer, Jitendra Malik, Alexei A. Efros, and Trevor Darrell. This article looks at one of the main problems with imitation learning, the expense of expert demonstrations, and at the authors' proposed method for sidestepping it: using an agent's random exploration to learn generalizable skills that can then be applied to any new task without task-specific pretraining.

Reducing the expense of expert demonstrations with zero-shot visual imitation

What problem is the paper trying to solve?

For imitation to be practical, the expert should be able to simply demonstrate tasks without extensive effort, instrumentation, or engineering. Collecting many demonstrations is time-consuming, exact state-action knowledge is impractical, and reward design is involved and takes more than task expertise. The agent should be able to achieve demonstrated goals without having to devote time to learning each and every task. To address these issues, the authors recast learning from demonstration as doing from demonstration by (1) only giving demonstrations during inference, and (2) restricting demonstrations to visual observations alone rather than full state-actions. Rather than being handed an imitation policy, the agent must learn to imitate. This is the goal the authors are trying to achieve.

Paper summary

The paper explains how existing approaches to imitation learning distill both what to do (the goal) and how to do it (the skills) from expert demonstrations. This expertise is effective but expensive supervision: it is not always practical to collect many detailed demonstrations. The authors suggest that if an agent has access to its environment along with the expert, it can learn skills from its own experience and rely on expertise for goals alone. They therefore propose a 'zero-shot' method that uses no expert actions or demonstrations during learning. The zero-shot imitator has no prior knowledge of the environment and makes no use of the expert during training; it learns from experience to follow experts. The authors conducted experiments such as navigating an office with a TurtleBot and manipulating rope with a Baxter robot.

Key takeaways

- The authors propose a method for learning a parametric skill function (PSF) that takes as input a description of the initial state, the goal state, and the parameters of the skill, and outputs a sequence of actions (of possibly varying length) that take the agent from initial state to goal state.
- The authors show real-world results for office navigation and rope manipulation but make no domain assumptions limiting the method to these problems.
- Zero-shot imitators learn to follow demonstrations without any expert supervision during learning. The approach learns task priors of representation, goals, and skills from the environment in order to imitate the goals given by the expert during inference.

Reviewer comments summary

Overall score: 25/30. Average score: 8. One reviewer found the proposed approach well founded and the experimental evaluations promising, and the paper well written and easy to follow. The skill function uses an RNN as a function approximator and minimizes the sum of two losses: the state mismatch loss over the trajectory (using an explicitly learnt forward model) and the action mismatch loss (using a model-free action prediction module). This is hard to do in practice because the forward model and the state mismatches must be learnt jointly, so the two are first learnt separately and then fine-tuned together.
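To make the two-loss structure described above concrete, here is a toy sketch that sums a state-mismatch loss and an action-mismatch loss over a trajectory. The linear "models" are stand-ins invented for illustration; the paper's actual modules are learned networks.

```python
import numpy as np

# Toy stand-ins for the paper's learned modules: a forward model
# f(s, a) -> s' and a model-free action predictor g(s, s') -> a.
def forward_model(state, action):
    return state + action

def action_predictor(state, next_state):
    return next_state - state

def trajectory_loss(states, actions):
    """Sum of state-mismatch and action-mismatch losses over a trajectory."""
    state_loss = 0.0
    action_loss = 0.0
    for t in range(len(actions)):
        predicted_next = forward_model(states[t], actions[t])
        state_loss += np.sum((predicted_next - states[t + 1]) ** 2)
        predicted_action = action_predictor(states[t], states[t + 1])
        action_loss += np.sum((predicted_action - actions[t]) ** 2)
    return state_loss + action_loss

states = np.array([[0.0], [1.0], [3.0]])   # s_0, s_1, s_2
actions = np.array([[1.0], [2.0]])         # a_0, a_1
print(trajectory_loss(states, actions))    # 0.0: the toy models are exact here
```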
Read more:
One Shot Learning: Solution to your low data problem
Using Meta-Learning in Nonstationary and Competitive Environments with Pieter Abbeel
What is Meta Learning?

“ChromeOS is ready for web development” - A talk by Dan Dascalescu at the Chrome Web Summit 2018

Sugandha Lahoti
15 Nov 2018
3 min read
At the Chrome Web Summit 2018, Dan Dascalescu, Partner Developer Advocate at Google, provided a high-level overview of ChromeOS and discussed Chrome's core and new features available to web developers. Topics included best practices for web development, including Progressive Web Apps, and optimizing input and touch for tablets while keeping desktop users in mind.

He pointed out that Chromebooks are convergence machines that run Linux, Android, and Google Play natively, without emulation, and explained why ChromeOS can be a good choice for web developers: it not only powers devices from sticks to tablets to desktops, it can also run web, Android, and now Linux applications. ChromeOS brings your development workflow together with a variety of form factors, from mobiles to tablets to desktops, and browsers on Android and Linux.

Run Linux apps on ChromeOS with Crostini

Stephen Barber, an engineer on ChromeOS, described Chrome's container architecture, which is based on Chrome's principles of safety, security, and reliability. By using lightweight containers and hardware virtualization support, Android and Linux code run natively in ChromeOS. Developers can run Linux apps on ChromeOS through Project Crostini. Crostini is based on Debian stable and uses both virtualization and containers to provide security in depth. For now, the team is targeting web developers by providing integration features like port forwarding to localhost as a secure origin. They also provide a penguin.linux.test DNS alias to treat a container like a separate system. To support more developer workflows than just the web, USB, GPU, audio, FUSE, and file sharing support are coming in upcoming releases.

Dan then showed how Crostini is actually used for developing web apps, demonstrating how easily Linux can be installed on a Chromebook. Although Crostini is still in development, most things work as expected. Developers can run IDEs and databases like MongoDB or MySQL; anything can be installed with apt. There is also a terminal.

Dan also mentioned Carlo, a Google project that is essentially a helpful Node app framework providing applications with Chrome rendering capabilities. It uses a locally installed instance of Chrome, connects to your process over a pipe, and exposes a high-level API for rendering in Chrome from your Node script. If you don't need low-level features, you can build your app as a PWA, which works without a launch bar once installed in ChromeOS. Desktop PWA support is available on Windows from Chrome 70+ and on Mac from Chrome 72+.

Dan also gave a demo on how to run a PWA. The steps were:

1. Set up Crostini.
2. Install the development environment (node, npm, VSCode).
3. Check out a PWA (Squoosh) from GitHub.
4. Open it in VSCode.
5. Run the web server.
6. Open the PWA from Linux and Android browsers.

He also provided guidance on optimizing forms, handling touch interactions and pointer events, and setting up remote debugging.

What does the future look like for ChromeOS?

The Chrome team is working on improving desktop PWA support, including keyboard shortcuts, badging for the launch icon, and link capturing. They are also working on low-latency canvas contexts, introduced in Chrome 71 Beta. This context uses OpenGL ES for rasterization and writes directly to the front buffer, which bypasses several steps of the rendering process at the risk of tearing; it is used mainly for highly interactive apps.

View the full talk on YouTube.

Read more:
Day 1 of Chrome Dev Summit 2018: new announcements and Google's initiative to close the gap between web and native
Meet Carlo, a web rendering surface for Node applications by the Google Chrome team
Google Chrome 70 now supports WebAssembly threads to build multi-threaded web applications


‘Istio’ available in beta for Google Kubernetes Engine, will accelerate app delivery and improve microservice management

Melisha Dsouza
12 Dec 2018
3 min read
KubeCon + CloudNativeCon, happening in Seattle this week, has excited developers with a plethora of new announcements and releases. The conference, dedicated to Kubernetes and other cloud native technologies, brings together adopters and technologists from leading open source and cloud native communities to discuss new advancements on the cloud front.

At this year's conference, Google Cloud announced the beta availability of Istio for Google Kubernetes Engine. Istio launched in mid-2017 as a collaboration between Google, IBM and Lyft. According to Google, this open-source "service mesh", used to connect, manage and secure microservices on a variety of platforms like Kubernetes, will play a vital role in helping developers make the most of their microservices.

Yesterday, Google Cloud Director of Engineering Chen Goldberg and Director of Product Management Jennifer Lin said in a blog post that the availability of Istio on Google Kubernetes Engine will provide "more granular visibility, security and resilience for Kubernetes-based apps". The service will be made available through Google's Cloud Services Platform, which bundles together the tools and services developers need to get their container apps up and running on the company's cloud or in on-premises data centres.

In an interview with SiliconANGLE, Holger Mueller, principal analyst and vice president of Constellation Research Inc., compared software containers to "cars": "Kubernetes has built the road but the cars have no idea where they are, how fast they are driving, how they interact with each other or what their final destination is. Enter Istio and enterprises get all of the above. Istio is a logical step for Google and a sign that the next level of deployments is about manageability, visibility and awareness of what enterprises are running."

Additional features of Istio:

- Istio allows developers and operators to manage applications as services rather than as lots of different infrastructure components.
- Istio allows users to encrypt all their network traffic, layering transparently onto any existing distributed application; users need not embed any client libraries in their code to get this functionality.
- Istio on GKE comes with an integration into Stackdriver, Google Cloud's monitoring and logging service.
- Istio securely authenticates and connects a developer's services to one another, transparently adding mTLS to service communication so that all information in transit is encrypted. It provides a service identity for each service, allowing developers to create service-level policies enforced for each individual application transaction, while providing non-replayable identity protection.

Istio is yet another step for GKE toward making it easier to secure and efficiently manage containerized applications. Head over to TechCrunch for more insights on this news.

Read more:
Google's Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud
What's new in Google Cloud Functions serverless platform
Introducing 'Pivotal Function Service' (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads


Kotlin 1.3 RC1 is here with compiler and IDE improvements

Natasha Mathur
21 Sep 2018
2 min read
The Kotlin team has released a release candidate for Kotlin 1.3, bringing improvements and changes to the compiler and to the IDE, IntelliJ IDEA. Let's discuss the key updates in Kotlin 1.3 RC1.

Compiler changes

Improvements:

- Support has been added for a main entry point without arguments in the frontend, IDE and JVM.
- Support has been added for suspend fun main functions on the JVM.
- The boxing technique has changed: instead of calling valueOf, a new wrapper type is now allocated.

Bug fixes:

- The invoke function being called with a lambda parameter on a field named suspend has been fixed.
- Correct WhenMappings code is now generated for mixed enum classes in when conditions.
- The use of KSuspendFunctionN and SuspendFunctionN as supertypes has been forbidden, as have suspend functions annotated with @kotlin.test.Test.
- The use of kotlin.Result as a return type and with special operators has been prohibited.
- Constructors containing inline classes as parameter types are now generated as private, with synthetic accessors.
- Missing unboxing of an inline class when indexing into an ArrayList has been fixed.

IDE changes

Improvements:

- Support has been added for type parameters in the where clause (multiple type constraints).

Bug fixes:

- The issue where the @Language prefix and suffix were ignored for function arguments has been fixed.
- The coroutine migrator now renames buildSequence/buildIterator to their new names.
- A deadlock in databinding with AndroidX, which led to Android Studio hanging, has been fixed.
- The issue of an Android module in a multiplatform project not being recognized as a multiplatform module has been fixed.
- Multiplatform projects without an Android target are now imported properly into Android Studio.
- IDEA no longer hangs when the Kotlin bytecode tool window is open while editing a class with a secondary constructor.
- IDE multi-platform: old multiplatform module templates have been removed from the New Project/New Module wizard.
- An actual type alias for ConcurrentModificationException has been introduced in the JVM library.

There are more changes and improvements in Kotlin 1.3 RC1; check out the official release notes for the complete list.

Read more:
Building RESTful web services with Kotlin
Kotlin 1.3 M1 arrives with coroutines and new experimental features like unsigned integer types
IntelliJ IDEA 2018.3 Early Access Program is now open!


The 5 biggest announcements from TensorFlow Developer Summit 2018

Sugandha Lahoti
02 Apr 2018
4 min read
The second TensorFlow Developer Summit was filled with exciting product announcements and technical talks from the TensorFlow team and guest speakers. Here are five major features extended to the TensorFlow machine learning framework, announced at the summit.

TensorFlow.js: machine learning brought to your browser

Using TensorFlow.js, developers can now define, train, and run machine learning models entirely in the browser. This open-source library is driven from JavaScript and offers a high-level layers API. What does this mean from a developer's perspective? TensorFlow.js allows importing an existing, pre-trained model, say a TensorFlow or Keras model, into the TensorFlow.js format, and developers can use transfer learning to re-train an imported model using only a small amount of data. What does it mean from a user's perspective? No need to install any libraries or drivers: just open a webpage and your program is ready to run. TensorFlow.js automatically supports WebGL, so it will accelerate your code when a GPU is available. Users may also open your webpage from a mobile device, where the model can take advantage of sensor data from the phone's gyroscope or accelerometer. All the data stays on the client, making TensorFlow.js useful for privacy-preserving and low-latency inference. You can see TensorFlow.js in action by trying out the Emoji Scavenger Hunt game from a browser on your mobile phone.

TensorFlow Hub: a library for reusable machine learning modules in TensorFlow

The next major announcement at the summit was TensorFlow Hub, a platform to publish, discover, and reuse parts of machine learning modules in TensorFlow. A module here is a self-contained piece of a TensorFlow graph, along with its weights, that can be reused across similar tasks. Reusing modules helps a developer train a model with a smaller dataset, improve generalization, or speed up training. TensorFlow Hub comes with two tools that help find potential issues in neural networks: a graphical debugger for inspecting the artificial neurons of an AI, and a visualization of how well the model as a whole analyzes large amounts of data.

TensorFlow Model Analysis

TFMA is an open-source library that combines the power of TensorFlow and Apache Beam to compute and visualize evaluation metrics, ensuring that ML models meet specific quality thresholds and behave as expected for all relevant slices of data. TFMA uses Apache Beam to do a full pass over the specified evaluation dataset, which allows more accurate calculation of metrics and scales up to massive evaluation datasets. TFMA lets developers visualize model metrics over time in a time-series graph, computed for a single model over multiple versions of the exported SavedModel, and uses slicing metrics to analyze the performance of a model at a more granular level.

TensorFlow is now available in more languages and platforms

The summit also brought good news for Swift programmers: as of April 2018, TensorFlow for Swift will be open sourced. TensorFlow for Swift is more than just a language binding for TensorFlow; it integrates first-class compiler and language support, providing the full power of graphs with the usability of eager execution. TensorFlow Lite, TensorFlow's cross-platform solution for deploying trained ML models on mobile, also received major updates. It now features full support for Raspberry Pi and increased support for ops/models (including custom ops). The TensorFlow Lite core interpreter is now only 75 KB in size (vs 1.1 MB for TensorFlow), with speedups of up to 3x when running quantized image classification models.

New applications and domains opened using TensorFlow

The summit also made announcements pertaining to sectors beyond core deep learning and neural network models. The TensorFlow Probability API provides state-of-the-art methods for Bayesian analysis, with building blocks like probability distributions, sampling methods, and new metrics and losses (a short sketch follows).
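As a taste of the TensorFlow Probability building blocks mentioned above, the following sketch defines a distribution, samples from it, and evaluates log-probabilities. It assumes the graph-mode TensorFlow 1.x API current at the time of the summit; the variable names are illustrative only.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# A standard normal prior, one of the probability-distribution
# building blocks the library provides.
prior = tfd.Normal(loc=0.0, scale=1.0)

samples = prior.sample(1000)         # Monte Carlo samples
log_probs = prior.log_prob(samples)  # log-density of each sample

# Graph-mode evaluation, as in TensorFlow 1.x.
with tf.Session() as sess:
    sample_values, log_prob_values = sess.run([samples, log_probs])

print(sample_values[:5])
print(log_prob_values[:5])
```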
The team also released Nucleus, a library for reading, writing, and filtering common genomics file formats for use in TensorFlow, alongside DeepVariant, an open-source TensorFlow-based tool for genome variant discovery. Both tools are intended to spur new research and advances in genomics.

The TensorFlow Developer Summit also showcased a new blog, YouTube channel, and other community resources.

Haiku, the open source BeOS clone, to release in beta after 17 years of development

Melisha Dsouza
10 Sep 2018
4 min read
The Haiku OS project, initially launched in August 2001 under the name "OpenBeOS", is nearing a beta release expected later this month. It has been over 17 years since the project launched, and more than 18 years since the last release of BeOS, the operating system that inspired it.

BeOS, launched in 1995 by Be Inc, almost became the operating system for Apple's hardware; however, the negotiations between Be Inc and Apple came to nothing and Apple decided in favour of NeXT. Used primarily for multimedia by software developers and users, BeOS had an impressive user interface, pre-emptive multitasking, symmetric multiprocessing and a 64-bit journaling file system. True to the system it clones, Haiku's boot performance is very good, and its user interface is modeled entirely after BeOS, down to the signature variable-width title bars and spatial file management.

Be Inc was shuttered in 2001. Although BeOS is dead, Haiku is very much alive, with around 50 people contributing patches every year. The biggest challenge Haiku faces is the length of time between releases: the most recent, Haiku R1 Alpha 4.1, dates back to November 2012. Explaining the considerable time this release has taken, Haiku developer Adrien Destugues says the team needed to figure out many details, from how to get the process back on track, to the buildbot setup, how to distribute the release to mirrors, where to get CDs pressed, and how to ship them to users who want to buy them. While the release is expected towards the end of this month, Destugues is open to delaying it another month or so in order to ship a quality product. He confirms, however, that beginning with the first beta there will be more frequent releases and continuous updates via the OS's package manager.

Why should Haiku users stick around?

Right after the first release of Haiku, the development team ran a poll with a list of features, for which developers and users would vote to decide whether each was 'R1' or 'not R1'. Along with fixes for many bugs encountered in the previous version, users can now look forward to new features, including:

- Support for Wi-Fi
- A modern web browser with CSS and HTML5 support
- Improvements to the APIs, including support for system notifications, application localisation, easier controls in the GUI, and 'stack and tile' window management
- A 'launch daemon' in charge of starting and monitoring system services
- 64-bit CPU support, and support for more than eight CPU cores
- USB3 and SATA support
- Support for more than 1 GB of RAM
- A package manager

All of these features will help Haiku's 200-odd users run it on a modern machine. Haiku is known among its users for its ease of use, responsiveness, and overall coherence. HaikuPorts and HaikuArchives currently include a range of software that can be used with the OS, including small 2D games, porting tools for embedded systems and the occasional Python library. Haiku has also made it possible to port Qt, LibreOffice, and other large applications over from the Linux world. Developers working with Haiku often encounter system bugs in the process, which means that if one is developing an application and needs a bug resolved, sooner or later they will be fixing the OS as well.

Naturally, some observers on Reddit are apprehensive about the timeline Haiku has committed to, having already waited long enough for a release. After a span of 17 years, it will be interesting to see how many Haiku users have stuck around for the beta. Head over to Computerworld for deeper insights on this project.

Read more:
Sugar operating system: A new OS to enhance GPU acceleration security in web apps
cstar: Spotify's Cassandra orchestration tool is now open source!
Storj Labs' Open Source Partner Program generates revenue opportunities for open source companies


Ireland’s Data Protection Commission initiates an inquiry into Google’s online Ad Exchange services

Savia Lobo
23 May 2019
3 min read
Ireland's Data Protection Commission (DPC) has opened an inquiry into Google Ireland Ltd. over user data collection during online advertising. The DPC will examine whether Google's online Ad Exchange complies with the General Data Protection Regulation (GDPR). The Data Protection Commission became the lead supervisory authority for Google in the European Union in January this year; this is the Irish commission's first statutory inquiry into Google since then. The DPC also serves as a so-called "one stop shop" for data protection regulation across the EU.

The investigation follows last year's privacy complaint filed under Europe's GDPR concerning the real-time bidding (RTB) system used in Google's adtech business. The complaint was filed by a host of privacy activists and Dr. Johnny Ryan of the privacy-focused browser Brave. Ryan accused Google's internet ad services business, DoubleClick/Authorized Buyers, of leaking users' intimate data to thousands of companies. Google bought the advertising serving and tracking company DoubleClick for $3.1bn (£2.4bn) in 2007; DoubleClick uses web cookies to track browsing behavior online by IP address in order to deliver targeted ads. Also this week, a new GDPR complaint against real-time bidding was filed in Spain, the Netherlands, Belgium, and Luxembourg.

https://twitter.com/mikarv/status/1130374705440018433

Read more: GDPR complaint in EU claim billions of personal data leaked via online advertising bids

Ireland's statutory inquiry is pursuant to section 110 of the Data Protection Act 2018 and will also draw on the various suspicions the commission has received. "The GDPR principles of transparency and data minimization, as well as Google's retention practices, will also be examined", the DPC blog mentions.

It has been a year since the GDPR was introduced on May 25, 2018, giving Europeans new powers to control their data. Ryan said in a statement: "Surveillance capitalism is about to become obsolete. The Irish Data Protection Commission's action signals that now — nearly one year after the GDPR was introduced — a change is coming that goes beyond just Google. We need to reform online advertising to protect privacy, and to protect advertisers and publishers from legal risk under the GDPR."

https://twitter.com/johnnyryan/status/1131246597139062791

Google was also fined 50 million euros ($56 million) earlier this year by France's privacy regulator, the first penalty for a U.S. tech giant since the EU's GDPR law was introduced. And in March, the EU fined Google 1.49 billion euros for antitrust violations in online advertising, the third antitrust fine by the European Union against Google since 2017.

Read more: European Union fined Google 1.49 billion euros for antitrust violations in online advertising

A Google spokesperson told CNBC: "We will engage fully with the DPC's investigation and welcome the opportunity for further clarification of Europe's data protection rules for real-time bidding. Authorized buyers using our systems are subject to stringent policies and standards."

To know more about this news, head over to the DPC's official press release.

Read more:
EU slaps Google with $5 billion fine for the Android antitrust case
U.S. Senator introduces a bill that levies jail time and hefty fines for companies violating data breaches
Advocacy groups push FTC to fine Facebook and break it up for repeatedly violating the consent order and unfair business practices


JDK 12 is all set for public release in March 2019

Prasad Ramesh
17 Sep 2018
3 min read
With JDK 11 reaching general availability next week, a proposed schedule has also been released for JDK 12. It indicates a final release in March 2019, along with two JDK Enhancement Proposals (JEPs) targeted at JDK 12. Mark Reinhold, Chief Architect of the Java Platform Group at Oracle, made the announcement in a mail posted to the OpenJDK mailing list. Per the mail, JDK 12 should be out to the public on March 19, 2019.

The proposed schedule for JDK 12:

- 13th December 2018: Rampdown Phase One
- 17th January 2019: Rampdown Phase Two
- 31st January 2019: Release-Candidate Phase
- 19th March 2019: General Availability

JDK 11 had a total of 17 JEPs, of which three were contributed by the community, the highest number in any JDK release; the other 14 were from Oracle, according to a tweet by @bellsoftware. For JDK 12, two JEPs have been integrated, which will be available as preview language features, and there are four candidate JEPs.

JDK 12 preview features

JEP 325: Switch Expressions (Preview). This JEP allows switch to be used both as a statement and as an expression. Both forms can use either a "traditional" or "simplified" scoping and control-flow behavior. The changes to the switch statement simplify everyday coding and pave the way for the use of pattern matching in switch.

JEP 326: Raw String Literals (Preview). This JEP adds raw string literals to Java. A raw string literal can span many source code lines and does not interpret escape sequences, such as \n, or Unicode escapes of the form \uXXXX. It introduces no new String operators, and there is no change in the interpretation of traditional string literals.

JDK 12 JEP candidates

JEP 343: Packaging Tool. Create a new tool, based on the JavaFX javapackager tool, for packaging self-contained Java applications.

JEP 342: Limit Speculative Execution. Help both developers and deployers defend against speculative-execution vulnerabilities by providing a means to limit speculative execution; this is not a complete defense against all forms of speculative execution.

JEP 340: One AArch64 Port, Not Two. Remove all arm64-port-related sources while retaining the 32-bit ARM port and the 64-bit AArch64 port. This will help focus on a single 64-bit ARM implementation and eliminate the duplicate work of maintaining two ports.

JEP 341: Default CDS Archives. Enhance the JDK build process to generate a class data-sharing (CDS) archive, using the default class list, on 64-bit platforms. The goal is to improve out-of-the-box startup time and eliminate the need for users to run -Xshare:dump to benefit from CDS.

To know more about the proposed schedule for JDK 12, visit the OpenJDK website.

Read more:
JEP 325: Revamped switch statements that can also be expressions proposed for Java 12
Mark Reinhold on the evolution of Java platform and OpenJDK
No more free Java SE 8 updates for commercial use after January 2019

Apple to merge the iPhone, iPad, and Mac apps by 2021

Natasha Mathur
21 Feb 2019
2 min read
Apple is planning to merge the apps made for iPhone, iPad, and Mac by 2021 as part of a project codenamed Marzipan, reports Bloomberg. Apple revealed details of the project at the 2018 Worldwide Developers Conference (WWDC).

Bloomberg notes that this move will make it easier for software developers to build tools and apps, since they will only have to build an app once to have it work on iPhone, iPad, and Mac computers. It will also boost the company's overall revenue, as Apple takes a cut of various app-related purchases and subscriptions.

Apple plans to launch a software development kit this June at WWDC to allow developers to port their iPad apps to Mac computers. The kit will save developers from writing their code twice, although they will still have to submit separate versions of an app to Apple's iOS and Mac App Stores. As reported by Bloomberg, Apple will expand the kit so that iPhone applications can be converted into Mac apps in 2020. Finally, by 2021, developers will be able to merge iPhone, iPad, and Mac applications into one app; developers will then no longer need to submit separate versions to different Apple App Stores, and iOS apps will be downloadable directly on Mac computers.

"The most direct benefit of the Marzipan project will be to make life easier for the millions of developers who write software for Apple's devices. For example, later this year Netflix Inc. would be able to more easily offer a Mac app for watching video by converting its iPad app," states Bloomberg. Twitter, likewise, could publish a single app for all its Apple customers by 2021.

Read more:
Apple acquires Pullstring to possibly help Apple improve Siri and other IoT-enabled gadgets
Apple and Google slammed by Human Rights groups for hosting Absher, a Saudi app that tracks women
Apple revoked Facebook developer certificates due to misuse of Apple's Enterprise Developer Program; Google also disabled its iOS research app


DC Airport nabs first imposter using its newly deployed facial recognition security system

Melisha Dsouza
27 Aug 2018
3 min read
Initial apprehension about facial recognition technology is beginning to give way to acceptance, as an incident at Washington Dulles International Airport shows. Just three days after the technology was implemented there, the system identified an imposter attempting to enter the US on a fake passport.

On August 23, US Customs and Border Protection (CBP) released news of a 26-year-old man traveling from Sao Paulo, Brazil, who presented a French passport to the CBP officer during primary inspection. The facial comparison biometric system determined that his face did not match the picture in the passport, and he was sent to secondary inspection for a thorough examination. He appeared nervous during the checks, and suspicions were confirmed when a search revealed the man's authentic Republic of Congo identification card concealed in his shoe.

NEC has collaborated with a total of 14 airports across the US to use facial recognition technology to screen out people arriving in the US with false documents. This has reduced the average wait time for arriving international passengers by around four minutes. According to International Trade Administration figures quoted by Quartz back in February 2017, about 104,525 people arrive in the US from overseas every day (excluding people entering from Mexico and Canada). Scanning such a large number of travelers each day is a daunting task for CBP, and facial recognition could significantly reduce the complexity that comes with traveler identification.

A gist of how the biometric system works

CBP first constructs a photo gallery of all the travelers on US-bound international aircraft using flight manifests and travelers' documents (mainly passports and visas). When travelers touch down in America, TSA officers guide them to a camera next to a document-checking podium. The camera snaps a picture and compares it to the one on their travel documents to determine whether they are indeed who they claim to be (see the toy sketch at the end of this article).

CBP asserts that the system will not only help nab terrorists and criminals before they can enter the US, but also speed up airport checks and eventually allow travelers to get through security processes without a boarding pass. CBP is clearly trying its best to use technology to make its operations more efficient and to detect security breaches at a scale never seen before. It remains to be seen whether the benefits of facial recognition, such as protecting the American people from external threats, outweigh the dangers of over-reliance on this tech, such as wrongly tagging people or infringing on individual freedom.

You can gain more insights into this story on TechSpot.
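As a toy illustration of the matching step described above, the sketch below compares a live capture against a document photo using embedding similarity. The embed function is a hypothetical stand-in; a real system would use a trained face-recognition network and a carefully tuned threshold.

```python
import numpy as np

def embed(image_vector):
    # Hypothetical stand-in for a face-embedding network: here we just
    # L2-normalize the input so that dot products become cosine similarity.
    return image_vector / np.linalg.norm(image_vector)

def is_match(live_photo, document_photo, threshold=0.9):
    """Return True if the live capture matches the document photo."""
    similarity = float(np.dot(embed(live_photo), embed(document_photo)))
    return similarity >= threshold

# Fake 128-dimensional "images" purely for demonstration.
live = np.random.rand(128)
document = np.random.rand(128)
print(is_match(live, document))
```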
Read more:
Google's new facial recognition patent uses your social network to identify you!
Admiring the many faces of Facial Recognition with Deep Learning
Amazon is selling facial recognition technology to police