
Tech News

3711 Articles

Rust Survey 2018 key findings: 80% developers prefer Linux, WebAssembly growth doubles, and more

Bhagyashree R
28 Nov 2018
4 min read
Yesterday, the Rust Survey team published the results of their annual Rust survey for 2018. This year's survey was launched in 14 different languages, which helped increase the number of responses to 5,991. The survey highlights a slight increase in medium to large investments in Rust, that most users prefer Linux over Windows for development, and more.

Growth in the number of Rust users

Rust is seeing steady growth in its number of users. Nearly 23% of these users have been using it for 3 months or less, and up to a quarter have been using it for at least 2 years. On how long it takes to get productive in Rust, 40% of Rust users said it takes less than a month of use, and over 70% felt productive within their first year. Over 22% of Rust users do not yet feel productive, and only about 25% of those are in their first month of use.

Larger overall investments in Rust projects

Rust projects are seeing larger overall investments and are trending toward larger sizes. The percentage of medium to large investments in Rust has increased from 8.9% in 2016, to 16% in 2017, to 23% this year. There is also some growth in daily Rust usage, from 17.5% last year to nearly a quarter of users this year. In total, weekly Rust usage has risen from 60.8% to 66.4%.

Difficulty level of common Rust concepts

Most Rust users consider themselves intermediates in terms of Rust expertise. Users felt that Enums and Cargo are the easiest concepts to learn, followed by Iterators, Modules, and Traits, Trait Bounds, and Unsafe. The most difficult concepts are Macros, Ownership & Borrowing, and Lifetimes.

Usage patterns of Rust tools

As last year, users prefer the current stable release of Rust. There is a slight increase in the number of Nightly compiler users, now over 56% (up from 51.6% last year).
Users opt for Nightly to access the 2018 edition, asm, async/await, clippy, embedded development, Rocket, NLL, proc macros, and wasm. The percentage of users who see a breakage during a routine compiler update remains the same as last year (7.4%). These breakages generally required minor fixes, though some users reported needing moderate or major fixes to upgrade to the next stable compiler.

90% of users voted rustup as their first choice for installing Rust. Linux distributions are the second option, accounting for only 17% of Rust installs. Tools like rustfmt and rustdoc got lots of positive support, followed by the clippy tool. The IDE support tools Rust Language Server and racer also had positive support but, unfortunately, drew a few more dislike votes and comments than the other tools surveyed. The bindgen tool has a relatively small user base.

Preferred development platforms of Rust users

While there is some increase in Windows usage, from 31% last year to 34% this year, Linux continues to be popular among Rust developers, with 80% of users opting for it. While there is not much change from last year for other target platforms, WebAssembly is an exception: its usage has nearly doubled, from 13% last year to 24% this year. In editors, VS Code has bested Vim, the front-runner among editors for two years, growing from 33.8% of Rust developers to 44.4% this year.

Increase in commercial use of Rust

Rust's part-time usage at the workplace has increased from 16.6% to 21.2%, and its full-time commercial use has doubled from 4.4% to 8.9%. In total, its commercial use has grown from 21% to just over 30% of Rust users. Despite this increase in commercial use, over a third of Rust users aren't sure their companies will invest in Rust.

To know more in detail, read the annual Rust Survey 2018.

Rust Beta 2018 is here
3 ways to break your Rust code into modules
Red Hat announces full support for Clang/LLVM, Go, and Rust


Amazon launches AWS Ground Station, a fully managed service for controlling satellite communications, downlink and processing satellite data

Amrata Joshi
28 Nov 2018
4 min read
Yesterday, Amazon introduced AWS Ground Station, a fully managed service for controlling satellite communications, downlinking, and processing satellite data. It can also be used to scale satellite operations quickly and cost-effectively.

Ground stations are at the core of satellite networks: they provide communication between the ground and the satellite. Currently, Amazon has a pair of ground stations, and it will have 12 in operation by mid-2019. Each ground station is associated with a particular AWS Region. One can use AWS Ground Station on an as-needed, pay-as-you-go basis instead of building a ground station or entering into a long-term contract. With AWS Ground Station, it is possible to get regular access to a ground station for capturing Earth observations or for distributing content worldwide at low cost, without building or maintaining antennas.

How does AWS Ground Station work?

The raw analog data from the satellite is processed by Amazon's modem digitizer into a data stream. It is then routed to an EC2 instance, which is responsible for processing the signal and turning it into a byte stream. Once the data is in digital form, streaming, processing, analytics, and storage options become available.

Streaming

Amazon Kinesis Data Streams (KDS), a massively scalable and durable real-time data streaming service, is used for capturing, processing, and storing data streams. KDS continuously captures gigabytes of data per second from thousands of sources such as website clickstreams, financial transactions, database event streams, social media feeds, location-tracking events, and IT logs. The collected data becomes available in milliseconds, enabling real-time analytics use cases such as real-time anomaly detection, real-time dashboards, dynamic pricing, and more.
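As a sketch of how a chunk of downlinked data might be pushed into Kinesis Data Streams with boto3, consider the snippet below. The stream name, record fields, and satellite ID are invented for illustration, and the actual call needs AWS credentials configured; here we only build the request.

```python
import json

def build_telemetry_record(stream_name, satellite_id, payload):
    """Build a Kinesis PutRecord request for a chunk of downlinked data.

    The stream name and payload fields are hypothetical, for illustration only.
    """
    return {
        "StreamName": stream_name,
        "Data": json.dumps({"satellite_id": satellite_id,
                            "payload": payload}).encode("utf-8"),
        # Records sharing a partition key land on the same shard, so data
        # from one satellite stays ordered.
        "PartitionKey": satellite_id,
    }

record = build_telemetry_record("ground-station-downlink", "sat-42", "deadbeef")

# With AWS credentials configured, the record would be sent like this:
# import boto3
# boto3.client("kinesis").put_record(**record)
```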
Processing

Amazon Rekognition, based on highly scalable deep learning technology developed by Amazon's computer vision scientists, is used to analyze billions of images and videos daily. No machine learning expertise is required to use it: a simple, easy-to-use API can quickly analyze any image or video file stored in Amazon S3. It also provides highly accurate facial analysis and facial recognition on images and videos.

Build, train, and deploy ML models

Amazon SageMaker, a fully managed platform, helps developers and data scientists build, train, and deploy machine learning models at any scale, removing much of the complexity for developers.

Analytics / Reporting

Amazon Redshift, a fast and scalable data warehouse, makes it simple and cost-effective to analyze all the data across the data warehouse and data lake. It delivers ten times faster performance than other data warehouses by using machine learning. It is easy to set up and deploy a new data warehouse in minutes, and to run queries across petabytes of data in the Redshift data warehouse.

Storage

Amazon Simple Storage Service (Amazon S3), an object storage service, offers industry-leading scalability, security, data availability, and performance. It can be used to store and protect any amount of data for a range of use cases, such as mobile applications, websites, archiving, backup and restore, enterprise applications, etc. Amazon S3 Glacier is also useful as a secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backup.

Though the idea of AWS Ground Station sounds very interesting, the cost is still a question. Users have to pay per minute of downlink time, which is expensive, so the promise of low cost falls short here. Also, observations might not be that accurate, as orbit determination requires control of the antenna. It will also be difficult to convince those who would still rather build their own ground station than rely on a third party.

To know more about this news, check out Amazon's official blog.

Amazon re:Invent announces Amazon DynamoDB Transactions, CloudWatch Logs Insights and cloud security conference, Amazon re:Inforce 2019
3 announcements about Amazon S3 from re:Invent 2018: Intelligent-Tiering, Object Lock, and Batch Operations
Amazon FreeRTOS adds a new 'Bluetooth low energy support' feature


Amazon confirms plan to sell a HIPAA eligible software, Amazon Comprehend Medical, which will mine medical records of the patients

Amrata Joshi
28 Nov 2018
5 min read
Yesterday, at the AWS re:Invent conference, Amazon announced Amazon Comprehend Medical, a HIPAA (Health Insurance Portability and Accountability Act) eligible software service. Amazon Comprehend Medical is a natural language processing service that uses machine learning to extract relevant medical information from unstructured text. With this software, one can gather information such as medical condition, medication, strength, dosage, and frequency from a variety of sources like clinical trial reports, doctors' notes, and patient health records. This extracted medical information can be used to build applications for clinical decision support, revenue cycle management, and clinical trial management.

Comprehend Medical follows a 'pay for how much you use' strategy and doesn't charge any minimum fees. Developers only need to provide unstructured medical text to Comprehend Medical; they don't have to manage any servers. It also identifies protected health information (PHI), such as name, age, and medical record number, which can be used to create applications that securely process, maintain, and transmit PHI.

Benefits of Amazon Comprehend Medical

Machine learning helps in accurately identifying medical information

Amazon Comprehend Medical uses advanced machine learning models to accurately identify medical information, such as medical conditions and medications. It also identifies their relationships to each other, for instance, the prescribed dosage and strength of a medicine. One can access Amazon Comprehend Medical through a simple API call, without the need for machine learning expertise, complicated rules, or training models.

Secures patients' data

Amazon Comprehend Medical identifies protected health information (PHI) stored in medical record systems while keeping up to the standards of the General Data Protection Regulation (GDPR).
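As an illustration of the simple API call mentioned above, here is a minimal boto3 sketch. The note text and the abbreviated sample response are invented for illustration, and the real call needs AWS credentials; here we only post-process a response-shaped dictionary.

```python
def summarize_entities(response):
    """Group detected entity texts by category (e.g. MEDICATION, MEDICAL_CONDITION)."""
    summary = {}
    for entity in response["Entities"]:
        summary.setdefault(entity["Category"], []).append(entity["Text"])
    return summary

# With credentials configured, the service is a single API call:
# import boto3
# response = boto3.client("comprehendmedical").detect_entities(
#     Text="Patient reports taking 40 mg of ibuprofen daily for knee pain."
# )

# Abbreviated, hypothetical response of the kind the service returns:
sample = {"Entities": [
    {"Text": "ibuprofen", "Category": "MEDICATION", "Type": "GENERIC_NAME", "Score": 0.99},
    {"Text": "40 mg", "Category": "MEDICATION", "Type": "DOSAGE", "Score": 0.98},
    {"Text": "knee pain", "Category": "MEDICAL_CONDITION", "Type": "DX_NAME", "Score": 0.97},
]}
print(summarize_entities(sample))
```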
Comprehend Medical helps developers implement data privacy and security solutions by extracting relevant patient identifiers as per HIPAA's Safe Harbor method of de-identification. It also does not store or save any customer data, so users need not worry on that front.

Lowers medical document processing costs

Comprehend Medical automates and lowers the cost of coding unstructured medical text from patient records, billing, and clinical indexing. It offers two APIs that developers can integrate into existing workflows and applications with only a few lines of code, at a cost of less than a penny for every 100 characters of analyzed text.

A reinvention of cancer care

The team at Amazon is also working with Seattle's own Fred Hutchinson Cancer Research Center to support its goal of eradicating cancer. Amazon Comprehend Medical helps identify patients for clinical trials who may benefit from specific cancer therapies. "Amazon Comprehend Medical will reduce this time burden from hours per record to seconds. This is a vital step toward getting researchers rapid access to the information they need when they need it so they can find actionable insights to advance lifesaving therapies for patients," said Matthew Trunnell, Chief Information Officer, Fred Hutchinson Cancer Research Center.

What about privacy, then?

This looks like a good move by Amazon, but people are questioning the technology behind it. Why is only ML/NLP used to analyze data? There is a lot of discrete data available in EMRs, including pharmacy, lab, eMAR, etc. What about that? Amazon's efforts still aren't enough to convince many. A user commented on Hacker News, "I work in the tech healthcare industry. I wonder why they only went with (or focused on?) ML/NLP text analysis to analyze data. There is a wealth of discrete data available in EMRs (pharmacy, lab, eMAR, etc.). Yes, there is plenty of diagnosis text but that is almost always associated with ICD10 codes. The only area where I believe text analysis would be useful is documentation and microbiology data, and in many cases micro is discrete as well."

Though Amazon Comprehend Medical matches the standards of the GDPR, people are still skeptical about the possibility of patient data being misused. Just five months ago, Amazon took over PillPack, a pharmacy, for $1 billion. Are hospitals next? If so, patients' data might be endangered for a few billion dollars. Amazon could also use patients' medical data for its own advertising. Also, users upload their health records to the Amazon cloud service and run the software there to analyze the data; the text is analyzed and the result comes back in the format of a spreadsheet. Any security attack might cause trouble, as there is a chance of a data breach. Note, too, that while Amazon Comprehend Medical is HIPAA eligible, it is not HIPAA compliant: it can sometimes be inaccurate, and it might not always meet the requirements for de-identification of protected health information under HIPAA.

https://twitter.com/DrSafeSpineCare/status/1067523439152562177

To know more about this news, check Amazon's official blog post.

Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
3 announcements about Amazon S3 from re:Invent 2018: Intelligent-Tiering, Object Lock, and Batch Operations
Amazon introduces Firecracker: Lightweight Virtualization for Running Multi-Tenant Container Workloads


Twitter blocks Predictim, an online babysitter-rating service, for violating its user privacy policies; Facebook may soon follow suit

Natasha Mathur
28 Nov 2018
3 min read
Yesterday, Twitter and Facebook accused Predictim of violating their policies on user surveillance and privacy. Predictim is an online service that gives you an overall risk score for a babysitter by scanning their social media profiles (Facebook, Twitter, Instagram, etc.) using language-processing algorithms.

Predictim's algorithms analyze "billions" of data points dating back years in a person's online profile. It then provides an evaluation, within minutes, of a babysitter's predicted traits, behaviors, and areas of compatibility based on their digital history. It makes use of language-processing algorithms and computer vision to evaluate babysitters' Facebook, Twitter, and Instagram posts for clues and information about their personal lives.

Facebook discovered Predictim's activities on its platform earlier this month and retracted most of Predictim's access to users, as first reported by the BBC. Facebook is now considering blocking the firm entirely from its platform after realizing that Predictim was still scraping public Facebook data to power its algorithms. "Scraping people's information on Facebook is against our terms of service. We will be investigating Predictim for violations of our terms, including to see if they are engaging in scraping," a Facebook spokeswoman told the BBC.

Twitter, on the other hand, told the BBC that it "recently" decided to block Predictim's access to its users. Twitter also said that it is strictly against companies making use of its data and APIs for surveillance or background checks.

Predictim responded by saying that Twitter and Facebook are already mining their users' data and "ganged up" on Predictim because there is no other benefit for them. It also said that it is just trying to "take advantage of that data to help parents pick a better babysitter and make a little money in the process".

Moreover, Drew Harwell, a reporter at the Washington Post, pointed out that Predictim appears to violate bans on employers demanding that job applicants verify or give access to their personal social media profiles. Such demands appear to violate the law in 26 US states, according to data from the National Conference of State Legislatures.

https://twitter.com/drewharwell/status/1067557804381298688

However, as per the CEO of Predictim, Sal Parsa, Predictim is "perfectly legal" because the service does not "demand" access to babysitters' social media but merely requires it for the most complete and accurate results.

Predictim has already received heavy criticism over the concern that it is not only prone to biases about how an ideal babysitter should behave, look, or share online, but that its personality scan results are also often inaccurate. This, in turn, leads to the software misjudging a person's personality based on their social media use. The firm, however, insists that Predictim is not designed to be used to make hiring decisions.

"Kids have inside jokes. They're notoriously sarcastic. Something that could sound like a 'bad attitude' to the algorithm could sound to someone else like a political statement or valid criticism," Jamie Williams of the Electronic Frontier Foundation told the BBC.

For more information, check out the official story by the BBC.

Facebook AI researchers investigate how AI agents can develop their own conceptual shared language
Twitter CEO, Jack Dorsey slammed by users after a photo of him holding 'smash Brahminical patriarchy' poster went viral
Twitter plans to disable the 'like' button to promote healthy conversations; should retweet be removed instead?


Uber’s new family of AI algorithms sets records on Pitfall and solves the entire game of Montezuma’s Revenge

Natasha Mathur
28 Nov 2018
6 min read
Earlier this week, Uber's AI research team introduced Go-Explore, a new family of algorithms capable of achieving scores of over 2,000,000 on the Atari game Montezuma's Revenge and an average score of over 21,000 on the Atari 2600 game Pitfall. This is the first time any learning algorithm has managed to score above 0 on Pitfall. Go-Explore outshines the other state-of-the-art algorithms on Montezuma's Revenge and Pitfall by two orders of magnitude and 21,000 points, respectively.

Go-Explore can use human domain knowledge but isn't entirely dependent on it, which highlights its ability to score well despite very little prior knowledge. For instance, Go-Explore managed to score over 35,000 points on Montezuma's Revenge with zero domain knowledge. As per the Uber team, "Go-Explore differs radically from other deep RL algorithms and could enable rapid progress in a variety of important, challenging problems, especially robotics. We, therefore, expect it to help teams at Uber and elsewhere increasingly harness the benefits of artificial intelligence".

A common challenge

One of the most common and challenging problems with Montezuma's Revenge and Pitfall is that of "sparse rewards" during the game exploration phase. A sparse reward setting offers few reliable reward signals or little feedback to help the player complete a stage and advance within the game quickly. To make things even more complicated, any rewards that are offered during the game are usually "deceptive": they mislead AI agents into maximizing reward in the short term instead of working toward something that could take them to the next game level (e.g., hitting an enemy nonstop instead of working toward the exit). Usually, to tackle such a problem, researchers add "intrinsic motivation" (IM) to agents, meaning that they get rewarded for reaching new states within the game.

Adding IM has helped researchers successfully tackle sparse reward problems in many games, but they still haven't been able to do so for Montezuma's Revenge and Pitfall.

Uber's solution: Exploration and Robustification

According to the Uber team, "a major weakness of current IM algorithms is detachment, wherein the algorithms forget about promising areas they have visited, meaning they do not return to them to see if they lead to new states. This problem would be remedied if the agent returned to previously discovered promising areas for exploration". Uber researchers have come up with a method that separates the agents' learning into two steps: exploration and robustification.

Exploration

Go-Explore builds up an archive of many different game states, called "cells", along with paths leading to those states. It selects a particular cell from the archive, goes back to that cell, and then explores from it. For every cell visited (including new cells), if the new trajectory is better (e.g., a higher score), it is chosen as the way to reach that cell. This helps Go-Explore remember and return to promising areas for exploration (unlike intrinsic motivation), avoid over-exploring, and remain less susceptible to "deceptive" rewards, as it tries to cover all reachable states.

Results of the exploration phase

Montezuma's Revenge: During the exploration phase, Go-Explore reaches an average of 37 rooms and solves level 1 (comprising 24 rooms, not all of which need to be visited) 65 percent of the time. The previous state-of-the-art algorithms could explore only 22 rooms on average.

Pitfall: Pitfall requires significant exploration and is much harder than Montezuma's Revenge, since it offers only 32 positive rewards scattered over 255 rooms.
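The exploration loop described above (select a cell from the archive, return to it, explore from it, and keep the best trajectory found to each cell) can be sketched on a toy deterministic environment. The environment and cell representation below are stand-ins: the real algorithm works on downsampled Atari frames as cells and restores emulator state to "go back".

```python
import random

def explore(step, start_cell, iterations=200, horizon=10, seed=0):
    """Toy sketch of Go-Explore's exploration phase."""
    rng = random.Random(seed)
    # archive: cell -> (best score seen, trajectory of actions reaching it)
    archive = {start_cell: (0.0, [])}
    for _ in range(iterations):
        cell = rng.choice(sorted(archive))      # 1. select a cell from the archive
        score, trajectory = archive[cell]
        state = cell                            # 2. "go back" (deterministic env)
        actions, total = list(trajectory), score
        for _ in range(horizon):                # 3. explore from that cell
            action = rng.choice([-1, 1])
            state, reward = step(state, action)
            actions.append(action)
            total += reward
            best = archive.get(state)
            # keep the best-scoring trajectory to every cell seen so far
            if best is None or total > best[0]:
                archive[state] = (total, list(actions))
    return archive

# Toy deterministic environment: walk on a half-line; reward only at cell 5.
def step(state, action):
    new_state = max(0, state + action)
    return new_state, (1.0 if new_state == 5 else 0.0)

archive = explore(step, start_cell=0)
```

By construction, every trajectory stored in the archive replays from the start state to exactly the cell it is filed under, which is what lets the robustification phase treat archived trajectories as demonstrations.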
The complexity of this game is so high that no prior RL algorithm had been able to collect even a single positive reward in it. During the exploration phase, Go-Explore is able to visit all 255 rooms and collects over 60,000 points. With zero domain knowledge, Go-Explore still finds an impressive 22 rooms but does not find any reward.

https://www.youtube.com/watch?v=L_E3w_gHBOY&feature=youtu.be (Uber AI Labs)

Robustification

If the solutions found via exploration are not robust to noise, they can be "robustified": domain knowledge is added using a deep neural network trained with an imitation learning algorithm, a type of algorithm that can learn a robust, model-free policy from demonstrations. Uber researchers chose Salimans & Chen's "backward" algorithm to get started, although any imitation learning algorithm would do. "We found it somewhat unreliable in learning from a single demonstration. However, because Go-Explore can produce plenty of demonstrations, we modified the backward algorithm to simultaneously learn from multiple demonstrations," writes the Uber team.

Results of robustification

Montezuma's Revenge: By robustifying the trajectories discovered with the domain knowledge version of Go-Explore, the team managed to solve the first 3 levels of Montezuma's Revenge. Since all levels beyond level 3 in this game are nearly identical, Go-Explore has effectively solved the entire game. "In fact, our agents generalize beyond their initial trajectories, on average solving 29 levels and achieving a score of 469,209! This shatters the state of the art on Montezuma's Revenge both for traditional RL algorithms and imitation learning algorithms that were given the solution in the form of a human demonstration," mentions the Uber team.

Pitfall: Once the trajectories had been collected in the exploration phase, the researchers managed to reliably robustify trajectories that collect more than 21,000 points. This led to Go-Explore outperforming both the state-of-the-art algorithms and average human performance, setting an AI record on Pitfall with a score of more than 21,000 points.

https://www.youtube.com/watch?v=mERr8xkPOAE&feature=youtu.be (Uber AI Labs)

"Some might object that, while the methods already work in the high-dimensional domain of Atari-from-pixels, it cannot scale to truly high-dimensional domains like simulations of the real world. We believe the methods could work there, but it will have to marry a more intelligent cell representation of interestingly different states (e.g. learned, compressed representations of the world) with intelligent (instead of random) exploration," writes the Uber team.

For more information, check out the official blog post.

Uber becomes a Gold member of the Linux Foundation
Uber announces the 2019 Uber AI Residency
Uber posted a billion dollar loss this quarter. Can Uber Eats revitalize the Uber growth story?


Amazon re:Invent announces Amazon DynamoDB Transactions, CloudWatch Logs Insights and cloud security conference, Amazon re:Inforce 2019

Melisha Dsouza
28 Nov 2018
4 min read
Day 2 of the Amazon AWS re:Invent 2018 conference kicked off with just as much enthusiasm as day one. With more announcements and releases scheduled for the day, the conference is proving to be a real treat for AWS developers. Alongside announcements like Amazon Comprehend Medical and new container products in the AWS Marketplace, Amazon also announced Amazon DynamoDB Transactions and Amazon CloudWatch Logs Insights. We will also take a look at Amazon re:Inforce 2019, a new conference dedicated solely to cloud security.

Amazon DynamoDB Transactions

Customers have used Amazon DynamoDB for multiple use cases, from building microservices and mobile backends to implementing gaming and Internet of Things (IoT) solutions. Amazon DynamoDB is a non-relational database delivering reliable performance at any scale. It offers built-in security, backup and restore, and in-memory caching, along with being a fully managed, multi-region, multi-master database that provides consistent single-digit-millisecond latency.

DynamoDB's native support for transactions will now help developers easily implement business logic that requires multiple all-or-nothing operations across one or more tables. With DynamoDB transactions, users get the atomicity, consistency, isolation, and durability (ACID) properties across one or more tables within a single AWS account and region. It is the only non-relational database that supports transactions across multiple partitions and tables. Two new DynamoDB operations have been introduced for handling transactions:

TransactWriteItems, a batch operation that contains a write set, with one or more PutItem, UpdateItem, and DeleteItem operations. It can optionally check for prerequisite conditions that need to be satisfied before making updates.

TransactGetItems, a batch operation that contains a read set, with one or more GetItem operations.
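A minimal sketch of what a TransactWriteItems call could look like with boto3: the table names, keys, and condition expressions below are invented for illustration, and a real call needs AWS credentials and existing tables, so here we only build the request.

```python
def build_order_transaction(order_id, customer_id):
    """All-or-nothing write: create an order and decrement inventory together.

    Hypothetical 'Orders' and 'Inventory' tables, for illustration only.
    """
    return {
        "TransactItems": [
            {"Put": {
                "TableName": "Orders",
                "Item": {"pk": {"S": f"ORDER#{order_id}"},
                         "customer": {"S": customer_id}},
                # Prerequisite condition: fail the whole transaction if the
                # order already exists.
                "ConditionExpression": "attribute_not_exists(pk)",
            }},
            {"Update": {
                "TableName": "Inventory",
                "Key": {"pk": {"S": "ITEM#widget"}},
                "UpdateExpression": "SET stock = stock - :one",
                "ConditionExpression": "stock >= :one",
                "ExpressionAttributeValues": {":one": {"N": "1"}},
            }},
        ]
    }

request = build_order_transaction("1001", "alice")

# With AWS credentials configured:
# import boto3
# boto3.client("dynamodb").transact_write_items(**request)
```

If either condition fails, neither the Put nor the Update is applied, which is exactly the all-or-nothing behavior the new operation provides.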
If a TransactGetItems request is issued on an item that is part of an active write transaction, the read transaction is canceled.

Amazon CloudWatch Logs Insights

Many AWS services create logs. Data points, patterns, trends, and insights embedded within these logs can be used to understand how an application and a user's AWS resources are behaving, identify room for improvement, and address operational issues. However, raw logs are huge, making analysis difficult. Considering that individual AWS customers routinely generate 100 terabytes or more of log files each day, such operations become complex and time-consuming.

Enter CloudWatch Logs Insights, designed to work at cloud scale with no setup or maintenance required. It churns through massive logs in seconds and provides fast, interactive queries and visualizations. CloudWatch Logs Insights includes a sophisticated ad-hoc query language, with commands to perform complicated operations efficiently. It is a fully managed service, can handle any log format, and auto-discovers fields from JSON logs. What's more, users can visualize query results using line and stacked-area charts, and add queries to a CloudWatch dashboard.

AWS re:Inforce 2019

In addition to these releases, Amazon also announced that AWS is launching a conference dedicated to cloud security, called 'AWS re:Inforce', for the very first time. The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Exhibit and Conference Center. Here is what the AWS re:Inforce 2019 conference is expected to cover:

A deep dive into the latest approaches to security best practices and risk management utilizing AWS services, features, and tools.

Direct access for customers to the latest security research and trends from subject-matter experts, along with the opportunity to participate in hands-on exercises with the services.
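As an illustration of the CloudWatch Logs Insights ad-hoc query language described above, here is a minimal boto3 sketch. The log group name, query, and time window are invented for illustration, and actually running the query requires AWS credentials; here we only assemble the request parameters.

```python
# Hypothetical Logs Insights query: the 20 most recent error lines.
QUERY = """
fields @timestamp, @message
| filter @message like /ERROR/
| sort @timestamp desc
| limit 20
"""

def build_query_params(log_group, query, end_time, window_seconds=3600):
    """Assemble keyword arguments for the CloudWatch Logs StartQuery API."""
    return {
        "logGroupName": log_group,
        "startTime": end_time - window_seconds,  # epoch seconds
        "endTime": end_time,
        "queryString": query,
    }

params = build_query_params("/aws/lambda/my-function", QUERY,
                            end_time=1_543_363_200)

# With AWS credentials configured, the query is started and then polled:
# import boto3
# logs = boto3.client("logs")
# query_id = logs.start_query(**params)["queryId"]
# results = logs.get_query_results(queryId=query_id)  # poll until "Complete"
```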
Multiple learning tracks will be covered over this 2-day conference, including a technical track and a business enablement track, designed to meet the needs of security and compliance professionals, from C-suite executives to security engineers, developers, and risk and compliance officers. The conference will also feature sessions on Identity & Access Management, Infrastructure Security, Detective Controls, Governance, Risk & Compliance, Data Protection & Privacy, Configuration & Vulnerability Management, and much more.

Head over to What's new with AWS to stay updated on upcoming AWS announcements.

Day 1 at the Amazon re:Invent conference – AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!
Amazon introduces Firecracker: Lightweight Virtualization for Running Multi-Tenant Container Workloads
AWS introduces 'AWS DataSync' for automated, simplified, and accelerated data transfer

Ex-Facebook manager says Facebook has a “black people problem” and suggests ways to improve

Melisha Dsouza
28 Nov 2018
7 min read
On 8th November, Mark Luckie, a former strategic partner manager for Facebook, posted an internal memo to Facebook employees arguing that Facebook is “failing its black employees and its black users.” The memo was sent shortly before he left the company, and just days after the New York Times report that put Facebook's leadership ethics under scrutiny.

Facebook and its ‘black people problem’

Mark Luckie, whose job was to handle the firm’s relationship with “influencers” focused on underrepresented voices, detailed a wide range of problems faced, both internally and externally, by the black community at Facebook. He pointed out that black people are some of the most engaged and active members of Facebook's 2.2 billion-member community. More specifically, according to Facebook’s own research, 63 percent of African Americans use Facebook to communicate with their family, and 60 percent use it to talk to their friends once a day, compared to 53 and 54 percent of the total U.S. population respectively. Yet many are unable to find a "safe space" for dialogue on the platform, finding their accounts suspended indefinitely and their content removed without notice.

Luckie’s memo states: “When determining where to allocate resources, ranking data such as followers, the greatest number of likes and shares, or yearly revenue are employed to scale features and products. The problem with this approach is Facebook teams are effectively giving more resources to the people who already have them. In doing so, Facebook is increasing the disparity of access between legacy individuals/brands and minority communities.”

"Facebook can't engender the trust of its black users if it can't maintain the trust of its black employees."

In the memo, Luckie congratulated the tech giant for increasing the share of black employees from 2 percent to 4 percent in 2018.
That said, he went on to list the many issues faced by employees, and criticized the firm's human resources department for protecting managers instead of supporting employees when such incidents occur. He said: "I've heard far too many stories from black employees of a colleague or manager calling them 'hostile' or 'aggressive' for simply sharing their thoughts in a manner not dissimilar from their non-black team members. A few black employees have reported being specifically dissuaded by their managers from becoming active in the [internal] Black@ group or doing 'black stuff,' even if it happens outside of work hours."

He pointed out the hypocrisy of a firm whose buildings are covered with ‘Black Lives Matter' posters while it does little to actually hire more black employees. The existing black employees are often hassled by security and viewed with suspicion by fellow employees.

“To feel like an oddity at your own place of employment because of the color of your skin while passing posters reminding you to be your authentic self feels in itself inauthentic.”

He claimed that black staffers at Facebook subdue their voices for fear of jeopardizing their professional relationships and career advancement.

After-effects of Mark’s memo

Mr Luckie’s comments created waves around social media. What followed was a pattern we are all familiar with: deny and deflect the blame. First came the public statement, from Facebook spokesman Anthony Harrison: “Over the last few years, we’ve been working diligently to increase the range of perspectives among those who build our products and serve the people who use them throughout the world. The growth in the representation of people from more diverse groups, working in many different functions across the company, is a key driver of our ability to succeed. We want to fully support all employees when there are issues reported and when there may be micro-behaviors that add up.
We are going to keep doing all we can to be a truly inclusive company.”

As reported by BBC News, the statement was followed by an internal leak: while Mr Luckie’s post was made public on Tuesday, it had been circulated at Facebook on 8th November, and at that time Ime Archibong, Facebook’s director of product partnerships, responded to the memo. On Tuesday, Mr Luckie posted his response on Twitter, suggesting Facebook’s public tone did not necessarily match what was said to him internally.

https://twitter.com/marksluckie/status/1067494650259345408

Mr Luckie seemed to attempt to protect Mr Archibong’s identity, but inadvertently left an ‘Ime’ in his tweet. Mr Archibong, who is also black, has confirmed he wrote the comments.

https://twitter.com/_ImeArchibong/status/1067520926114148352

Archibong was disappointed that the conversation was made public, and described Mr Luckie’s note as “pretty self-serving and disingenuous” while accusing him of having a “selfish agenda and not one that has the best intentions of the community and people you likely consider friends at heart”.

The whole situation again suggests that Facebook is more concerned with not looking bad than with assessing whether it is doing bad, and what it can do to make its forum more approachable and safe for different members of the community.

Mark’s Recommendations to “improve Facebook’s relationship with diverse communities”

Mark ends the memo with some recommendations for the company. Some of these include:

- For any team that has one or more people dedicated specifically to diversity, require a strategic plan for how that work will be incorporated into the larger goals of the team. Create metrics for other team members to incorporate into their goals as well, to ensure representation is everyone's responsibility.
- Implement data-driven goals to ensure partnerships, product testing, and client support are reflective of the demographics of Facebook.
- Level up cultural competency training for Operations teams that review reported infractions on Facebook and Instagram. Whenever possible, avoid relying solely on algorithms or AI to triage these problems.
- Create internal systems for employees to anonymously report microaggressions. This includes using coded language like “lowering the bar” or “hostile,” disproportionately giving lower performance review scores to women and people of color, or discouraging employees from engaging in cultural activities outside of their agreed-upon work schedule. If these reported infractions surface a pattern, require the manager and/or team to attend sensitivity training to amend the behavior.
- Support emerging talent and brands by creating a pipeline of communication and scaled support that allows them to further build with the platform.
- Establish more regularly-scheduled focus groups with underrepresented communities, particularly the Black and Latino users who over-engage on Facebook and Instagram. Use these conversations to gain insight on how to grow the platform.

After Mark's memo went viral, many black employees from big tech companies came forward with their own stories of harassment at the workplace, including athlete Leslie Miller, who tweeted:

https://twitter.com/shaft/status/1067479669593726976

The memo's publication comes on the same day that a Facebook executive was grilled by parliamentary leaders from nine different countries at a special hearing on disinformation in the United Kingdom. You can head over to Facebook’s Blog to read the memo in its entirety.

NYT Facebook exposé fallout: Board defends Zuckerberg and Sandberg; Media call and transparency report Highlights
Outage plagues Facebook, Instagram and Whatsapp ahead of Black Friday Sale, throwing users and businesses into panic
Facebook’s outgoing Head of communications and policy takes blame for hiring PR firm ‘Definers’ and reveals more

React 16.x roadmap released with expected timeline for features like “Hooks”, “Suspense”, and “Concurrent Rendering”
Sugandha Lahoti
28 Nov 2018
3 min read
Yesterday, the React team published a roadmap for React 16.x releases. They have split the rollout of new React features into different milestones. The team has made it clear that they have a single vision for how all of these features fit together, but are releasing each part as soon as it is ready so that users can start testing them sooner.

The expected milestones

React 16.6: Suspense for Code Splitting (already shipped)

This new feature can “suspend” rendering while components are waiting for something, and display a loading indicator in the meantime. It is a convenient programming model that provides a better user experience in Concurrent Mode. In React 16.6, Suspense for code splitting supports only one use case: lazy loading components with React.lazy() and <React.Suspense>.

React 16.7: React Hooks (~Q1 2019)

React Hooks give users access to features like state and lifecycle from function components. They also let developers reuse stateful logic between components without introducing extra nesting in a tree. Hooks are only available in the 16.7 alpha versions of React, and some of their API is expected to change in the final 16.7 version. In future releases, class support might move to a separate package, reducing the default bundle size of React.

React 16.8: Concurrent Mode (~Q2 2019)

Concurrent Mode lets React apps be more responsive by rendering component trees without blocking the main thread. It is opt-in, and allows React to interrupt a long-running render to handle a high-priority event. Concurrent Mode was previously referred to as “async mode”; the name change highlights React’s ability to perform work at different priority levels, which sets it apart from other approaches to async rendering. As of now, the team doesn’t expect many bugs in Concurrent Mode, but notes that components that produce warnings in <React.StrictMode> may not work correctly.
They plan to publish more guidance about diagnosing and fixing issues as part of the 16.8 release documentation.

React 16.9: Suspense for Data Fetching (~mid 2019)

In the already-shipped React 16.6, the only supported use case for Suspense is code splitting. In the future 16.9 release, React will officially support ways to use Suspense for data fetching. The team will provide a reference implementation of a basic “React Cache” that’s compatible with Suspense, and data fetching libraries like Apollo and Relay will be able to integrate with Suspense by following a simple specification. The team expects this feature to be adopted incrementally, through layers like Apollo or Relay rather than directly.

The team also plans to complete two more projects, Modernizing React DOM and Suspense for Server Rendering, in 2019. As these projects require more exploration, they aren’t tied to a particular release as of now. For more information, visit the React blog.

React Conf 2018 highlights: Hooks, Concurrent React, and more
React introduces Hooks, a JavaScript function to allow using React without classes
React 16.6.0 releases with a new way of code splitting, and more!

3 announcements about Amazon S3 from re:Invent 2018: Intelligent-Tiering, Object Lock, and Batch Operations
Bhagyashree R
28 Nov 2018
3 min read
On day 1 of re:Invent 2018, Amazon announced three additions to its Simple Storage Service (S3): Intelligent-Tiering, Object Lock, and Batch Operations. S3 Intelligent-Tiering automatically optimizes storage costs based on data access patterns, S3 Object Lock prevents the deletion or overwriting of an object for a specified amount of time, and S3 Batch Operations makes managing billions of objects easier with a single API request.

Amazon S3, or Simple Storage Service, provides object storage through a web interface. It enables users to store and retrieve any amount of data safely, and its management features help users organize that data easily. The service is used in many scenarios, such as websites, mobile applications, backup and restore, archiving, enterprise applications, IoT devices, and big data analytics.

Amazon S3 Intelligent-Tiering for automatic cost optimization

Amazon S3 comes with different storage classes designed for different use cases. The storage classes supported by S3 are Standard, Standard-IA, One Zone-IA, and Glacier. Amazon has now added the S3 Intelligent-Tiering storage class, which automatically optimizes storage costs when data access patterns change. It consists of two tiers: frequent access and infrequent access. It cuts costs by monitoring your access patterns and moving data that has not been accessed for 30 consecutive days to the infrequent access tier; an object is moved back to the frequent access tier when it is accessed again. Read the official announcement for more details on Intelligent-Tiering.

Amazon S3 Object Lock to prevent object version deletion for a customer-defined retention period

Amazon S3 Object Lock is a new feature that allows customers to store objects using the write-once-read-many (WORM) model. With this feature, you will be able to prevent an object from being deleted or overwritten for a fixed amount of time or indefinitely.
The feature is currently in preview in all AWS Regions and AWS GovCloud (US) Regions. S3 Object Lock comes with two mechanisms to manage object retention: retention periods and legal holds. A retention period is a fixed period of time during which your object is WORM-protected and can’t be deleted or overwritten; a legal hold offers the same protection but has no expiration date. An object version can have both a retention period and a legal hold. Read the official announcement for more details on Object Lock.

Amazon S3 Batch Operations for object management

Amazon S3 Batch Operations makes managing billions of objects stored in Amazon S3 easier, with a single API request or a few clicks in the S3 Management Console. With this feature, you will be able to copy objects between buckets, replace object tag sets, update access controls, restore objects from Amazon Glacier, and invoke AWS Lambda functions. The feature will be available in all AWS commercial and AWS GovCloud (US) Regions. Read the official announcement for more details on Batch Operations.

Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power
AWS introduces ‘AWS DataSync’ for automated, simplified, and accelerated data transfer
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
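As a rough sketch of how the Intelligent-Tiering and Object Lock features above surface in the S3 API, the snippet below builds the parameter set for a single PutObject request; with boto3, this dict would be passed to `s3.put_object(**params)`. The bucket and key names are hypothetical placeholders, and no request is actually sent here.

```python
from datetime import datetime, timedelta, timezone

def build_put_object_params(bucket: str, key: str, retention_days: int) -> dict:
    """Parameters for an S3 PutObject request using Intelligent-Tiering
    and a WORM retention period (sketch only; nothing is uploaded)."""
    retain_until = datetime.now(timezone.utc) + timedelta(days=retention_days)
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": b"example payload",
        # Intelligent-Tiering: S3 moves the object between the frequent and
        # infrequent access tiers based on observed access patterns.
        "StorageClass": "INTELLIGENT_TIERING",
        # Object Lock: protect this object version from deletion or overwrite
        # until the retain-until date (the bucket must have Object Lock enabled).
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate": retain_until,
    }

params = build_put_object_params("my-demo-bucket", "logs/2018-11-28.json", retention_days=30)
print(params["StorageClass"], params["ObjectLockMode"])
```

Note the two features compose on the same object: the storage class governs cost, while the lock mode and retain-until date govern the retention period described above.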

Amazon introduces Firecracker: Lightweight Virtualization for Running Multi-Tenant Container Workloads
Melisha Dsouza
27 Nov 2018
3 min read
The Amazon re:Invent 2018 conference saw a surge of new announcements and releases. The five-day event, which commenced in Las Vegas yesterday, has already seen some exciting AWS developments, like AWS RoboMaker, AWS Transfer for SFTP (a fully managed SFTP service for Amazon S3), EC2 A1 instances powered by Arm-based AWS Graviton processors, an improved AWS Snowball Edge, and much more. In this article, we look at the latest release: ‘Firecracker’, a new virtualization technology and open source project for running multi-tenant container workloads.

Firecracker is open sourced under Apache 2.0 and enables service owners to operate secure, multi-tenant, container-based services. It combines the speed, resource efficiency, and performance enabled by containers with the security and isolation offered by traditional VMs. Firecracker implements a virtual machine monitor (VMM) based on Linux's Kernel-based Virtual Machine (KVM). Users can create and manage microVMs with any combination of vCPU and memory with the help of a RESTful API. It offers fast startup times, a reduced memory footprint for each microVM, and a trusted sandboxed environment for each container.

Features of Firecracker

- Firecracker is secure by design, using multiple levels of isolation and protection. Its security model includes a very simple virtualized device model to minimize the attack surface, a process jail, and static linking.
- It delivers high performance, allowing users to launch a microVM in as little as 125 ms.
- It has low overhead, consuming about 5 MiB of memory per microVM. This means a user can run thousands of secure VMs, with widely varying vCPU and memory configurations, on the same instance.
- Firecracker is written in Rust, which guarantees thread safety and prevents many types of buffer overrun errors that can lead to security vulnerabilities.
The AWS community has shown a positive response to this release:

https://twitter.com/abbyfuller/status/1067285030035046400

AWS Lambda uses Firecracker for provisioning and running secure sandboxes to execute customer functions. These sandboxes can be quickly provisioned with a minimal footprint, enabling performance along with security. AWS Fargate tasks also execute on Firecracker microVMs, which allows the Fargate runtime layer to run faster and more efficiently on EC2 bare metal instances.

To learn more, head over to the Firecracker page. You can also read more at Jeff Barr's blog and the Open Source blog.

AWS re:Invent 2018: Amazon announces a variety of AWS IoT releases
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power
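Firecracker's RESTful API mentioned above is served over a Unix domain socket. The sketch below lists the sequence of calls one would make to configure and boot a minimal microVM, per the project's public API; the kernel and rootfs paths are hypothetical placeholders, and the requests are only constructed here, not sent (in practice they would be issued with e.g. `curl --unix-socket /tmp/firecracker.socket`).

```python
import json

def microvm_boot_sequence(vcpus: int, mem_mib: int) -> list:
    """The (method, path, body) sequence for booting a microVM via
    Firecracker's REST API (sketch only; file paths are placeholders)."""
    return [
        # 1. Size the machine: vCPU count and memory for this microVM.
        ("PUT", "/machine-config", {"vcpu_count": vcpus, "mem_size_mib": mem_mib}),
        # 2. Point at an uncompressed Linux kernel image.
        ("PUT", "/boot-source", {
            "kernel_image_path": "/images/vmlinux.bin",
            "boot_args": "console=ttyS0 reboot=k panic=1",
        }),
        # 3. Attach a root filesystem as a block device.
        ("PUT", "/drives/rootfs", {
            "drive_id": "rootfs",
            "path_on_host": "/images/rootfs.ext4",
            "is_root_device": True,
            "is_read_only": False,
        }),
        # 4. Start the guest.
        ("PUT", "/actions", {"action_type": "InstanceStart"}),
    ]

for method, path, body in microvm_boot_sequence(vcpus=1, mem_mib=128):
    print(method, path, json.dumps(body))
```

The small, fixed set of endpoints reflects the minimal device model described above: there is simply very little machinery to configure, which is part of why startup can be as fast as 125 ms.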
AWS IoT Greengrass extends functionality with third-party connectors, enhanced security, and more
Savia Lobo
27 Nov 2018
3 min read
At AWS re:Invent 2018, Amazon announced new features for AWS IoT Greengrass. These latest features extend the capabilities of AWS IoT Greengrass and its core configuration options, and include:

- connectors to third-party applications and AWS services
- hardware root of trust private key storage
- isolation and permission settings

New features of AWS IoT Greengrass

AWS IoT Greengrass connectors

With the new AWS IoT Greengrass connectors, users can easily build complex workflows on AWS IoT Greengrass without having to understand device protocols, manage credentials, or interact with external APIs. The connectors allow users to connect to third-party applications, on-premises software, and AWS services without writing code.

Re-use common business logic

Users can now re-use common business logic from one AWS IoT Greengrass device to another through the ability to discover, import, configure, and deploy applications and services at the edge. They can also use AWS Secrets Manager at the edge to protect keys and credentials in the cloud and at the edge; secrets can be attached and deployed from AWS Secrets Manager to groups via the AWS IoT Greengrass console.

Enhanced security

AWS IoT Greengrass now provides enhanced security with hardware root of trust private key storage on hardware secure elements, including Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). Storing private keys on a hardware secure element adds hardware-root-of-trust-level security to existing AWS IoT Greengrass security features, which include X.509 certificates for TLS mutual authentication and encryption of data both in transit and at rest. Users can also use the hardware secure element to protect secrets deployed to the AWS IoT Greengrass device using AWS IoT Greengrass Secrets Manager.
Deploy AWS IoT Greengrass to another container environment

With the new configuration options, users can deploy AWS IoT Greengrass to another container environment and directly access device resources such as Bluetooth Low Energy (BLE) devices or low-power edge devices like sensors. They can even run AWS IoT Greengrass on devices without elevated privileges and without the AWS IoT Greengrass container, at a group or individual AWS Lambda level. Users can also change the identity associated with an individual AWS Lambda function, providing more granular control over permissions. To know more about the other updated features, head over to the AWS IoT Greengrass website.

AWS re:Invent 2018: Amazon announces a variety of AWS IoT releases
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power

Amazon FreeRTOS adds a new ‘Bluetooth low energy support’ feature
Natasha Mathur
27 Nov 2018
2 min read
The Amazon team announced a newly added Bluetooth Low Energy (BLE) support feature for Amazon FreeRTOS. Amazon FreeRTOS is an open source, free-to-download-and-use IoT operating system for microcontrollers that makes it easy to program, deploy, secure, connect, and manage small, low-powered devices. It extends the FreeRTOS kernel (a popular open source operating system for microcontrollers) with software libraries that make it easy to connect small, low-power devices to AWS cloud services, or to more powerful devices that run AWS IoT Greengrass, a software that helps extend cloud capabilities to local devices.

Amazon FreeRTOS

With the help of Amazon FreeRTOS, you can collect data from these devices for IoT applications. Earlier, it was only possible to connect devices to a local network using common connection options such as Wi-Fi and Ethernet. Now, with the addition of the new BLE feature, you can securely connect Amazon FreeRTOS devices that use BLE to AWS IoT via Android and iOS devices. BLE support in Amazon FreeRTOS is currently available in beta.

Amazon FreeRTOS is widely used in industrial applications, B2B solutions, and consumer products companies, such as appliance, wearable technology, or smart lighting manufacturers. For more information, check out the official Amazon FreeRTOS update post.

FreeRTOS affected by 13 vulnerabilities in its TCP/IP stack
Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018

AWS introduces ‘AWS DataSync’ for automated, simplified, and accelerated data transfer
Natasha Mathur
27 Nov 2018
3 min read
The AWS team introduced AWS DataSync, an online data transfer service for automating data movement, yesterday. AWS DataSync transfers data from on-premises storage to Amazon S3 or Amazon Elastic File System (Amazon EFS), and vice versa. Let's have a look at what AWS DataSync offers.

Key functionalities

- Move data 10x faster: AWS DataSync uses a purpose-built data transfer protocol along with a parallel, multi-threaded architecture that can run 10 times as fast as open source data transfer tools. This speeds up both migrations and recurring data processing workflows for analytics, machine learning, and data protection.
- Per-gigabyte fee: It is a managed service, and you pay only a per-gigabyte fee for the amount of data you transfer. Beyond that, there are no upfront costs and no minimum fees.
- DataSync agent: The AWS DataSync agent is a crucial part of the service. It connects your existing storage to the in-cloud service to automate, scale, and validate transfers, which means you don't have to write scripts or modify your applications.
- Easy setup: It is very easy to set up and use (console and CLI access are available). All you need to do is deploy the DataSync agent on-premises, connect it to your file systems using the Network File System (NFS) protocol, select Amazon EFS or S3 as your AWS storage, and start moving data.
- Secure data transfer: AWS DataSync offers secure data transfer over the internet or AWS Direct Connect. It also comes with automatic encryption and data validation, which minimizes the in-house development and management needed for fast and secure transfers.
- Simplify and automate data transfer: With AWS DataSync, you can perform one-time data migrations, transfer on-premises data for timely in-cloud analysis, and automate replication to AWS for data protection and recovery.
AWS DataSync is available now in the US East, US West, Europe, and Asia Pacific Regions. For more information, check out the official AWS DataSync blog post.

Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
Day 1 at the Amazon re:Invent conference – AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!
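The setup steps above map onto a small number of DataSync API calls. The sketch below builds the request payloads that a client such as boto3's DataSync client would send to create an NFS source location, an S3 destination, and a transfer task; every ARN, hostname, and path here is a hypothetical placeholder, and no calls are actually made.

```python
def build_datasync_requests(agent_arn: str, nfs_host: str,
                            bucket_arn: str, role_arn: str) -> dict:
    """Request payloads for CreateLocationNfs, CreateLocationS3, and
    CreateTask (sketch only; all identifiers are placeholders)."""
    return {
        # Source: an on-premises NFS export, reached through the deployed agent.
        "CreateLocationNfs": {
            "ServerHostname": nfs_host,
            "Subdirectory": "/exports/data",
            "OnPremConfig": {"AgentArns": [agent_arn]},
        },
        # Destination: an S3 bucket, accessed via an IAM role.
        "CreateLocationS3": {
            "S3BucketArn": bucket_arn,
            "S3Config": {"BucketAccessRoleArn": role_arn},
        },
        # Task: wires the two location ARNs (returned by the calls above) together.
        "CreateTask": {
            "SourceLocationArn": "<arn returned by CreateLocationNfs>",
            "DestinationLocationArn": "<arn returned by CreateLocationS3>",
            "Name": "nightly-migration",
        },
    }

requests = build_datasync_requests(
    agent_arn="arn:aws:datasync:us-east-1:123456789012:agent/agent-0example",
    nfs_host="files.example.internal",
    bucket_arn="arn:aws:s3:::my-demo-bucket",
    role_arn="arn:aws:iam::123456789012:role/DataSyncS3Access",
)
print(sorted(requests))
```

Once the task exists, each run would be kicked off with a StartTaskExecution call on the task's ARN; the agent then handles the parallel, multi-threaded transfer and validation described above.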
AWS re:Invent 2018: Amazon announces a variety of AWS IoT releases
Prasad Ramesh
27 Nov 2018
4 min read
At the AWS re:Invent 2018 event yesterday, Amazon announced a variety of IoT-related AWS releases.

Three new AWS IoT Service Delivery Designations at AWS re:Invent 2018

The AWS Service Delivery Program helps customers find and select top APN Partners who have a track record of delivering specific AWS services. APN Partners undergo a technical validation of their service delivery expertise in order to earn an AWS Service Delivery designation. Three new AWS IoT Service Delivery Designations have now been added: AWS IoT Core, AWS IoT Greengrass, and AWS IoT Analytics.

AWS IoT Things Graph

AWS IoT Things Graph provides an easy way for developers to connect different devices and web services in order to build IoT applications. Devices and web services are represented as reusable components called models. These models hide low-level details and expose the states, actions, and events of underlying devices and services as APIs. A drag-and-drop interface is available to connect the models visually and define interactions between them, which can build multi-step automation applications. Once built, the application can be deployed to your AWS IoT Greengrass-enabled device with a few clicks. It can be used in areas such as home automation, industrial automation, and energy management.

AWS IoT Greengrass has extended functionality

AWS IoT Greengrass brings capabilities like local compute, messaging, data caching, sync, and ML inference to edge devices. New features that extend the capabilities of AWS IoT Greengrass can now be used, including:

- Connectors to third-party applications and AWS services.
- Hardware root of trust private key storage.
- Isolation and permission configurations that increase the AWS IoT Greengrass Core configuration options.

The connectors allow you to easily build complex workflows on AWS IoT Greengrass even if you have no understanding of device protocols, managing credentials, or interacting with external APIs. Connections can be made without writing code.
Security is increased by hardware root of trust private key storage on hardware secure elements, including Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). By storing your private key on a hardware secure element, hardware-root-of-trust-level security is added to existing AWS IoT Greengrass security features, which include X.509 certificates. This enables mutual TLS authentication and encryption of data, whether in transit or at rest. The hardware secure element can also be used to protect secrets deployed to the AWS IoT Greengrass device. New configuration options allow deploying AWS IoT Greengrass to another container environment and directly accessing low-power devices like Bluetooth Low Energy (BLE) devices.

AWS IoT SiteWise, available in limited preview

AWS IoT SiteWise is a new service that simplifies collecting and organizing data from industrial equipment at scale. With this service, you can easily monitor equipment across industrial facilities to identify waste, production inefficiencies, and defects in products. With IoT SiteWise, industrial data is stored securely and is available and searchable in the cloud. IoT SiteWise integrates with industrial equipment via a gateway, which securely connects to on-premises data servers to collect data and send it to the AWS Cloud. AWS IoT SiteWise can be used in areas such as manufacturing, food and beverage, energy, and utilities.

AWS IoT Events, available in preview

AWS IoT Events is a new IoT service that makes it easy to catch and respond to events from IoT sensors and applications. The service recognizes events across multiple sensors in order to identify operational issues, like equipment slowdowns, and triggers alerts to notify support teams of an issue. It offers a managed complex event detection service on the AWS cloud, making it simple to detect events across thousands of IoT sensors measuring values like temperature and humidity.
System-wide event detection and responding with appropriate actions is easy and cost-effective with AWS IoT Events. Potential areas of use include manufacturing, oil and gas, and commercial and consumer products.

Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
Data science announcements at Amazon re:invent 2017

Amnesty International takes on Google over Chinese censored search engine, Project Dragonfly
Richard Gall
27 Nov 2018
3 min read
Google has come under continued criticism for its censored Chinese search engine since it was revealed earlier this year. But the project, named Project Dragonfly, today faces a day of action from the human rights organization Amnesty International. Saying that "the app would set a dangerous precedent for tech companies enabling rights abuses by governments," Amnesty yesterday launched a petition opposing the project, and will be coordinating protests outside Google offices around the world.

Although Google has faced tough criticism before, not least from within the organization itself, Amnesty International's focus on the company represents another major challenge for Google to contend with as it ends a tough 2018. Arguably, Amnesty has shifted the complexion of the issue: it has forced it to become a question of human rights, not just of business priorities and practical compromises.

What does Amnesty say about Google's Project Dragonfly?

As you can imagine, Amnesty International is unequivocal in its condemnation of the censored search engine. Joe Westby, Amnesty International’s Researcher on Technology and Human Rights, said: "This is a watershed moment for Google. As the world’s number one search engine, it should be fighting for an internet where information is freely accessible to everyone, not backing the Chinese government’s dystopian alternative."

One of Amnesty's biggest fears is that Project Dragonfly could set a precedent, making it acceptable for tech companies to cooperate with nations with poor records on human rights. Westby argued: "If Google is happy to capitulate to the Chinese government’s draconian rules on censorship, what’s to stop it cooperating with other repressive governments who control the flow of information and keep tabs on their citizens?"

What is Amnesty International doing to protest?
Amnesty International has put together a plan to raise awareness of Project Dragonfly, in a bid to gain more support from Google employees and, indeed, the wider public. Alongside the petition and planned protests, Amnesty has also put together a satirical Google recruitment video: if you want to work for Google on the project, you need "great coding skills, five years’ experience, and absolutely no morals."

https://www.youtube.com/watch?v=c6nU42tqXvA&fe=

How has Google responded to Amnesty International?

Google hasn't, at the time of publication, responded to any requests for comment. However, CEO Sundar Pichai has always defended Project Dragonfly from criticism, saying that with China accounting for more than 20% of the world's population, Google is "compelled" to continue its mission to help spread information to everyone around the world, regardless of who or where they are. He has also been keen to stress that Project Dragonfly is only an experiment, and has declined to commit to a timeline for launching the search engine. It would appear that Google is still testing the waters, seeing if it can find a PR line it thinks employees and the general public will be happy with.