
How-To Tutorials

Katie Bouman unveils the first ever black hole image with her brilliant algorithm

Amrata Joshi
11 Apr 2019
11 min read
Remember the supermassive black hole in the movie Interstellar? That one was a simulation. Real black holes swallow everything that strays too close, even light. A black hole's event horizon casts a shadow, and that shadow is enough to answer many of the questions attached to black hole theory, which is why scientists and researchers have been working for years to get that one image to anchor their research. Now comes the big news: a team of astronomers, engineers, researchers and scientists has managed to capture the first ever image of a black hole, located in a distant galaxy. It measures 40 billion km across, three million times the size of the Earth; the team describes it as "a monster". It was photographed by a network of eight telescopes across the world. In this article, we give you a glimpse of how the image was captured.

Katie Bouman, a PhD student at MIT, appeared at TED Talks and discussed the efforts taken by the team of researchers, engineers, astronomers and scientists to capture the first ever image of a black hole. Katie is part of an international team of astronomers that created the Event Horizon Telescope, effectively the world's largest telescope, to take that first picture. She led the development of a computer program that made this impossible feat possible, and she started working on the algorithm three years ago as a graduate student.

https://twitter.com/jenzhuscott/status/1115987618464968705

Katie wrote in the caption of one of her Facebook posts, "Watching in disbelief as the first image I ever made of a black hole was in the process of being reconstructed."

https://twitter.com/MIT_CSAIL/status/1116035007406116864

Further, she explains how the stars we see in the sky orbit an invisible object, and according to astronomers, the only thing that can cause this motion is a supermassive black hole.

Zooming in at radio wavelengths to see a ring of light

"Well, it turns out that if we were to zoom in at radio wavelengths, we'd expect to see a ring of light caused by the gravitational lensing of hot plasma zipping around the black hole. Is it possible to see something that, by definition, is impossible to see?" -Katie Bouman

If we look closely, we can see that the black hole casts a shadow on the backdrop of bright material, carving out a sphere of darkness. The bright ring reveals the black hole's event horizon, where the gravitational pull becomes so powerful that even light can't escape. Einstein's equations predict the size and shape of this ring, so taking a picture of it would help verify that those equations hold in the extreme conditions around a black hole.

Capturing a black hole needs a telescope the size of the Earth

"So how big of a telescope do we need in order to see an orange on the surface of the moon and, by extension, our black hole? Well, it turns out that by crunching the numbers, you can easily calculate that we would need a telescope the size of the entire Earth." -Katie Bouman

Bouman explains that the black hole is so far from Earth that its ring appears incredibly small to us, as small as an orange on the surface of the moon, which makes photographing it enormously difficult: diffraction places fundamental limits on the smallest objects a telescope can resolve.
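Her "telescope the size of the Earth" line follows directly from the diffraction limit. Here is a back-of-the-envelope sketch in Python, using the roughly 1.3 mm observing wavelength and the ~40 microarcsecond ring size the EHT team has published; the numbers are illustrative, not the team's actual calculation:

```python
# Rough version of the "telescope the size of the Earth" estimate.
import math

wavelength_m = 1.3e-3   # EHT observing wavelength, ~1.3 mm
ring_size_uas = 40      # apparent ring diameter, ~40 microarcseconds

# Convert microarcseconds to radians: 1 arcsecond = pi / (180 * 3600) rad.
theta_rad = ring_size_uas * 1e-6 * math.pi / (180 * 3600)

# Rayleigh criterion: smallest resolvable angle ~ 1.22 * wavelength / diameter,
# so the required aperture diameter is:
diameter_m = 1.22 * wavelength_m / theta_rad

print(f"Required aperture: {diameter_m / 1000:,.0f} km")  # roughly Earth-scale
print("Earth's diameter:  12,742 km")
```

The required aperture works out to several thousand kilometres, the same order of magnitude as the Earth itself, which is exactly Bouman's point.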
So the astronomers realized they needed a much bigger telescope; even the most powerful optical telescopes fall far short of the resolution needed to image something the size of an orange on the surface of the moon. She showed the audience one of the highest-resolution images of the moon ever taken from Earth: it contained around 13,000 pixels, and each pixel would contain over 1.5 million oranges.

Capturing the black hole turned into reality by connecting telescopes

"And so, my role in helping to take the first image of a black hole is to design algorithms that find the most reasonable image that also fits the telescope measurements." -Katie Bouman

Since nobody can build a telescope the size of the Earth, capturing a black hole seemed out of reach. Bouman highlighted the famous words of Mick Jagger, "You can't always get what you want, but if you try sometimes, you just might find you get what you need." Capturing the black hole became a reality by connecting telescopes from around the world. The Event Horizon Telescope, an international collaboration, created a computational telescope the size of the Earth, capable of resolving structure on the scale of a black hole's event horizon. Each telescope in the worldwide network worked together; the research teams at each site collected thousands of terabytes of data, which was then processed in a lab in Massachusetts.

To understand this, imagine the Earth as a spinning disco ball in which each mirror collects light that can be combined into a picture. Now remove most of those mirrors so that only a few remain. It is still possible to combine the information, but there are now a lot of holes; the remaining mirrors represent the locations where the telescopes have been set up. This seems like far too few measurements to make a picture from, but the method is effective: light is collected at only a few telescope locations, yet as the Earth rotates, the mirrors change location and new measurements become available, so the astronomers get to observe different parts of the image. The imaging algorithms developed by the team fill in the disco ball's missing gaps to reconstruct the underlying black hole image.

Katie Bouman said, "If we had telescopes located everywhere on the globe -- in other words, the entire disco ball -- this would be trivial. However, we only see a few samples, and for that reason, there are an infinite number of possible images that are perfectly consistent with our telescope measurements."

Not all of those images are created equal, though: some look more like what astronomers think of as images than others. Bouman's role was to design algorithms that find the most reasonable image that fits the telescope measurements; her imaging algorithms used the limited telescope data to guide astronomers to a picture, piecing an image together from sparse and noisy data.
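The disco-ball analogy maps nicely onto how interferometric imaging actually behaves: each pair of telescopes samples one spatial-frequency component of the image, and most components are simply missing. The toy numpy sketch below is illustrative only, nothing like the real EHT pipeline (which uses carefully designed priors rather than zero-filling); it just shows how reconstruction error shrinks as "mirror" coverage grows:

```python
# Toy "disco ball": keep only a sparse random subset of a fake source image's
# 2D Fourier coefficients, invert naively, and measure the error.
import numpy as np

n = 64
y, x = np.mgrid[:n, :n]
source = (((x - n / 2) ** 2 + (y - n / 2) ** 2) ** 0.5 < 10).astype(float)  # bright disk

spectrum = np.fft.fft2(source)

rng = np.random.default_rng(0)
for coverage in (0.02, 0.10, 0.50):
    mask = rng.random((n, n)) < coverage          # which "facets" we sampled
    dirty = np.fft.ifft2(spectrum * mask).real    # naive zero-filled inverse
    err = np.abs(dirty - source).mean()
    print(f"coverage {coverage:4.0%}: mean reconstruction error {err:.3f}")
```

With very few samples the reconstruction is badly corrupted, which is precisely why the choice of algorithm, and the assumptions baked into it, matters so much.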
How the algorithm was used to create the black hole image

"I'd like to encourage all of you to go out and help push the boundaries of science, even if it may at first seem as mysterious to you as a black hole." -Katie Bouman

An infinite number of possible images perfectly explain the telescope measurements, and the astronomers and researchers have to choose between them. They do so by ranking candidate images by how likely each is to be the black hole image, and then selecting the most likely one. Bouman explained this with an example: "Let's say we were trying to make a model that told us how likely an image were to appear on Facebook. We'd probably want the model to say it's pretty unlikely that someone would post this noise image on the left, and pretty likely that someone would post a selfie like this one on the right. The image in the middle is blurry, so even though it's more likely we'd see it on Facebook compared to the noise image, it's probably less likely we'd see it compared to the selfie."

For black hole images, this ranking gets confusing, because astronomers have never seen a black hole before. It is difficult to rely on previous theory, and just as risky to rely completely on simulated images for comparison. She said, "What is a likely black hole image, and what should we assume about the structure of black holes? We could try to use images from simulations we've done, like the image of the black hole from "Interstellar," but if we did this, it could cause some serious problems. What would happen if Einstein's theories didn't hold? We'd still want to reconstruct an accurate picture of what was going on. If we bake Einstein's equations too much into our algorithms, we'll just end up seeing what we expect to see. In other words, we want to leave the option open for there being a giant elephant at the center of our galaxy."

Different types of images have distinct features, so it is quite possible to tell black hole simulation images apart from the images the team captured. The researchers therefore needed to let the algorithms know what images look like without imposing one type of image's features. The way to do that is to impose the features of different kinds of images and then look at how the assumed image type affects the final reconstruction. If all the image types produce a very similar-looking reconstruction, the researchers can become more confident in their image assumptions. She said, "This is a little bit like giving the same description to three different sketch artists from all around the world. If they all produce a very similar-looking face, then we can start to become confident that they're not imposing their own cultural biases on the drawings."

Different image features can be imposed by using pieces of existing images. So the astronomers and researchers took a large collection of images and broke them down into little image patches, treating each patch like a piece of a puzzle and using commonly seen puzzle pieces to piece together an image that also fits the telescope measurements. She said, "Let's first start with black hole image simulation puzzle pieces. OK, this looks reasonable. This looks like what we expect a black hole to look like.
But did we just get it because we just fed it little pieces of black hole simulation images?"

To check, the team took sets of puzzle pieces from everyday images, like the ones we take with our own personal cameras. When all the different sets of puzzle pieces produce the same image, we can become more confident that our image assumptions aren't biasing the final result. Another check, Bouman explained, is to take the same set of puzzle pieces, such as those derived from everyday images, and use them to reconstruct many different kinds of source images. She said, "So in our simulations, we pretend a black hole looks like astronomical non-black hole objects, as well as everyday images like the elephant in the center of our galaxy." When the results of the algorithms look very similar to the simulated image, the researchers and astronomers become more confident in their algorithms.

She emphasized that all of these pictures were created by piecing together little pieces of everyday photographs. So an image of a black hole, something we have never seen before, can be created by piecing together pictures we see all the time: people, buildings, trees, cats and dogs.

She concluded by appreciating the efforts of her team: "But of course, getting imaging ideas like this working would never have been possible without the amazing team of researchers that I have the privilege to work with. It still amazes me that although I began this project with no background in astrophysics [...] big projects like the Event Horizon Telescope are successful due to all the interdisciplinary expertise different people bring to the table."

This project will surely encourage researchers, engineers, astronomers and students who doubt themselves but have the potential to make the impossible possible.

https://twitter.com/fchollet/status/1116294486856851459

Is the YouTube algorithm's promoting of #AlternativeFacts like Flat Earth having a real-world impact?
YouTube disables all comments on videos featuring children in an attempt to curb predatory behavior and appease advertisers
Using Genetic Algorithms for optimizing your models [Tutorial]


4,520+ Amazon employees sign an open letter asking for a “company-wide plan that matches the scale and urgency of climate crisis”

Sugandha Lahoti
11 Apr 2019
7 min read
Over 4,520 Amazon employees and counting are organizing against Amazon's continued profiting from climate devastation. Yesterday, they signed an open letter addressed to Jeff Bezos and the Amazon board of directors asking for a company-wide action plan to address climate change and an end to the company's reliance on dirty energy resources. This coalition is one of the largest employee-led movements for climate change in the tech industry.

https://twitter.com/AMZNforClimate/status/1116020593546018817

The letter states, "Climate change is an existential threat. Amazon's leadership is urgently needed. We're a company that understands the importance of thinking big, taking ownership of hard problems, and earning trust. These traits have made Amazon a top global innovator but have been missing from the company's approach to climate change. We believe this is a historic opportunity for Amazon to stand with employees and signal to the world that we're ready to be a climate leader."

The Amazon Employees for Climate Justice group grew out of a shareholder resolution co-filed by 28 current and former employees in December. As previously reported by the New York Times, these tech workers used the stock grants they receive as compensation to agitate for action on climate change. Per the official press release, Amazon employees had multiple meetings with leadership, asking for a company-wide climate plan and for the Board to support the resolution. They received no agreement in response and were informed that the Board will be printing a statement of opposition in the shareholder ballot to be released in the coming days. What started with 28 workers grew to 3,500 signatures within 48 hours of the letter's publication, and more than 4,500 at the time of writing.

Climate change is real, and we need a plan

The letter asks Amazon to release a company-wide climate plan in line with these principles: "Public goals and timelines to reduce emissions that are consistent with science and the 2018 Intergovernmental Panel on Climate Change (IPCC) report. A complete transition off of fossil fuels rather than relying on carbon offsets. Prioritizing climate impacts in business decisions, including ending partnerships with fossil fuel companies that accelerate oil and gas exploration and extraction. Reducing harm caused by Amazon operations to vulnerable communities first. Advocacy for local, federal, and international policies to reduce carbon emissions and withholding support from policymakers who delay action on climate change. Fair treatment of all employees during extreme weather events linked to climate change."

"Tech workers know the world is facing a climate emergency that is causing devastation to communities around the world, and vulnerable communities least responsible for the climate crisis are already paying the highest price," said Emily Cunningham, a User Experience Designer who co-filed the resolution and signed the letter. "We have a responsibility, as one of the largest companies in the world, to account for the sizeable contributions we are making toward accelerating climate change."

Amazon is failing in its plans to go green

In 2014, following suit with Apple, Facebook, and Google, Amazon announced that it would power its data centers with 100 percent renewable energy. However, since 2018 Amazon has reportedly slowed its renewable energy efforts, running on only 50 percent renewables.
It has also not announced any new deals to supply clean energy to its data centers since 2016, according to a report by Greenpeace, and it quietly abandoned plans for one of its last scheduled wind farms last year.

AWS for Oil & Gas initiative

The issue central to the letter is the demand to end the Amazon Web Services initiative that builds custom solutions to help fossil fuel companies accelerate oil and gas discovery and extraction. Per an investigation by Gizmodo, the company has built partnerships with clients in the oil and gas industry, such as BP, Shell, and Halliburton, offering machine learning and data services "for enhanced exploration, IoT-enabled oilfield automation, and remote site data transportation." The irony is hard to miss: a company marketing plans to go fully renewable runs an AWS for Oil & Gas initiative devoted to helping fossil fuel companies accelerate and expand oil and gas extraction. In its February 2019 report, Greenpeace wrote, "Despite Amazon's public commitment to renewable energy, the world's largest cloud computing company is hoping no one will notice that it's still powering its corner of the internet with dirty energy."

"Partnering with fossil fuel companies demonstrates that climate is not a priority for Amazon leadership," said Jamie Kowalski, a software developer who signed the letter. "The science is clear: we must keep fossil fuels in the ground to avert catastrophic warming. How can we say we care about the climate when we're accelerating extractive processes that deliberately ignore the reality of the threat we face?"

Shipment Zero

In February 2019, Amazon announced the Shipment Zero initiative, through which it aims to make 50 percent of its shipments carbon neutral by 2030. However, Shipment Zero only commits to net carbon reductions. Amazon recently ordered 20,000 diesel vans whose emissions will need to be offset with carbon credits. Offsets can entail forest management policies that displace indigenous communities, and they do nothing to reduce diesel pollution, which disproportionately harms communities of color. Some in the industry expressed disappointment that Amazon's order is for 20,000 diesel vans, and not a single electric vehicle.

"Amazon's Shipment Zero announcement is the first step, and it showed the positive impact that employee pressure can have," said Maren Costa, a Principal User Experience Designer who co-filed the resolution. "We all—individuals, corporations, governments—simply need to do more. Amazon needs a company-wide plan that matches the scale and urgency of the climate crisis, and Shipment Zero is not nearly enough. That's why we're asking all Amazon workers to join us by signing our letter to Jeff Bezos and the Board."

Amazon also says it will be disclosing its carbon footprint sometime this year. However, it will not be submitting its disclosure to the nonprofit Carbon Disclosure Project for independent verification; Amazon says it is "developing its own approach to tracking and reporting carbon emissions," according to CNBC. The letter also called out Amazon's 2018 donations to 68 members of Congress who voted against climate legislation 100% of the time.

A number of other rights groups have spoken publicly about their solidarity with Amazon employees. The Google Walkout coalition, which protests sexual harassment, misconduct, lack of transparency and a biased workplace at Google, also showed its support.
https://twitter.com/GoogleWalkout/status/1116034151021449216

Microsoft Workers 4 Good also appreciated the stand taken by Amazon employees and called on all employees to encourage their employers to take action on climate change.

https://twitter.com/MsWorkers4/status/1116027540257132544

In a statement to The Verge, an Amazon spokesperson highlighted company initiatives, such as work to reduce the carbon footprint of shipments, and described Amazon's commitment to environmental issues as "unwavering." "Amazon's sustainability team is using a science-based approach to develop data and strategies to ensure a rigorous approach to our sustainability work," the spokesperson said. "We have launched several major and impactful programs and are working hard to integrate this approach fully across Amazon."

If you work at Amazon and want to sign the letter, email amazonemployeesclimatejustice@gmail.com from your Amazon work email with the subject line "signature". In the body of the email, include your name and job title as you'd like them to appear on the list of signatories.

Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley
Google workers demand fair treatment for contractors; company rolls out mandatory benefits in response, to improve working conditions
Microsoft workers protest the lethal use of HoloLens2 in the $480m deal with US military


What can Blockchain developers learn from Eclipse Attacks in a Bitcoin network - Koshik Raj

Packt Editorial Staff
11 Apr 2019
11 min read
Networking attacks in blockchain are mostly ignored because of the difficulty involved in defeating a decentralized network that runs on a peer-to-peer protocol. That doesn't mean attacks on peer-to-peer networks are impossible. In this article, we'll discuss one such network attack, the eclipse attack. This article is an excerpt from the book Foundations of Blockchain by Koshik Raj. Blockchain technology is a combination of three popular concepts: cryptography, peer-to-peer networking, and game theory. Foundations of Blockchain is for anyone who wants to dive into blockchain from first principles and learn how decentralized applications and cryptocurrencies really work.

In an eclipse attack, the attacker eclipses a node from the network: the attacker makes sure that the node cannot communicate with the rest of the blockchain network, so that once compromised, the node believes in a completely different truth than the rest of the network. Generally, eclipse attacks are performed on high-profile blockchain nodes such as miners or merchants.

The eclipse attack was proposed by computer security researchers Ethan Heilman, Alison Kendler, Aviv Zohar, and Sharon Goldberg in 2015, in a USENIX Security paper titled Eclipse Attacks on Bitcoin's Peer-to-Peer Network. The paper explains the possibility of an attack on Bitcoin's peer-to-peer network; although it focuses on Bitcoin, the attack can be performed on the peer-to-peer networks of other blockchain platforms as well. Another paper, Low-Resource Eclipse Attacks on Ethereum's Peer-to-Peer Network, published in 2018, analyzed the feasibility of an eclipse attack on the Ethereum network. We will look into the details of eclipse attacks based on the first of these papers.

In a blockchain network, peers use a gossip protocol to set up initial connections and exchange information: each node learns about the peers in the network from the nodes it is connected to. In an eclipse attack, the attacker prevents the victim from learning about the rest of the network by not gossiping about other nodes. The attacker's nodes connect directly to the victim node, as shown in Figure 1. The attack looks similar to a man-in-the-middle attack performed between a client and a server in a centralized network. To understand and analyze the eclipse attack in the coming sections, we will assume the attack takes place in Bitcoin's Proof of Work ecosystem.

Figure 1: The position of the attacker in an eclipse attack

Eclipsing the node

A Bitcoin node can have a maximum of 8 outgoing and 117 incoming connections. Since there's a limit on the number of outgoing connections, the attacker can force the victim to establish its outgoing connections solely to malicious nodes created by the attacker:

Figure 2: A Bitcoin node's outgoing and incoming connections in a peer-to-peer network

This may look easy in theory, but forcing the victim to connect only to malicious nodes takes more than a single step. The attacker has to learn and manipulate the victim's connection information in order to control its outgoing connections. Bitcoin nodes store outgoing connection information in a peer table organized into buckets of addresses; filling these buckets with the attacker's IP addresses is the idea behind the attack. An attacker will exploit several vulnerabilities in Bitcoin Core to achieve this.
Once the peer table is filled with the attacker's node information, the victim will only attempt to connect to the attacker's nodes after it has been rebooted.

Implications and analysis of the eclipse attack

Blockchain applications revolve around the principle of decentralization, which is achieved with the help of equally responsible nodes connected to form a peer-to-peer network. For a purely decentralized network, most blockchain nodes (if not all) must exhibit similar functionality. This can be challenging, because no authority in a public network can enforce rigid rules on the functionality of nodes. Many blockchain networks are being forced to centralize, either to improve performance or to integrate with existing centralized entities, and this exposes decentralized systems to issues already faced by centralized systems. We'll discuss some of the entities that have caused centralization and exposed decentralized networks to potential threats.

To gain insight into the security issues centralization has created for exchanges, consider a high-profile example. In August 2016, around 120,000 bitcoins were stolen from Bitfinex wallets. Bitfinex reduced the bitcoin funds of all their customers by 36%, including customers whose wallets were not compromised. Newly minted BFX tokens were deposited in customers' accounts in proportion to their losses; since these tokens had no intrinsic value on any other exchange, Bitfinex promised to eventually buy back the distributed tokens.

Bitcoin has two different sets of buckets that store peer information: new buckets and tried buckets. New buckets hold addresses of newly discovered peers, whereas tried buckets hold addresses of peers the node has already connected to. When a node first connects to a peer, it adds the peer's information, along with a timestamp, to a tried bucket. The connected peer passes known peer information to the node, which the node stores in a new bucket; when the node connects to the attacker's device, the attacker sends information about its malicious peers so that the node stores those addresses in its new buckets. When the node successfully makes a new connection, it adds the IP address to one of the 256 tried buckets. It selects a single bucket at random, but the randomization is keyed on the node's network ID and the full IP address; the same applies when adding IP addresses to the new buckets. Various vulnerabilities of the Bitcoin node can be exploited to ensure that most of the addresses in these buckets are the attacker's addresses; several are pointed out in the Vulnerabilities and countermeasures section below.
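To make the bucket mechanics concrete, here is a simplified sketch of how an address might be mapped to one of the 256 tried buckets. The names and hashing details are illustrative, not Bitcoin Core's exact scheme; the point is only that the bucket index is derived from a per-node secret key, the peer's network group, and its full address:

```python
# Simplified, hypothetical sketch of address-to-bucket mapping (not Bitcoin
# Core's actual algorithm). Because the attacker controls many addresses
# across many network groups, it can land entries in many different buckets.
import hashlib

NUM_TRIED_BUCKETS = 256

def tried_bucket(node_secret: bytes, ip: str) -> int:
    group = ".".join(ip.split(".")[:2])  # crude /16 network group
    h1 = hashlib.sha256(node_secret + ip.encode()).digest()
    h2 = hashlib.sha256(node_secret + group.encode() + h1[:1]).digest()
    return int.from_bytes(h2[:4], "big") % NUM_TRIED_BUCKETS

secret = b"per-node-random-key"
for ip in ["203.0.113.5", "203.0.113.77", "198.51.100.23"]:
    print(ip, "->", tried_bucket(secret, ip))
```

The per-node secret means an attacker cannot predict exactly which bucket an address will land in, but with enough addresses and enough time it can still dominate the table, which is what the attack exploits.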
Since the eclipse attack is performed on the network layer, it can also break the security of the consensus layer: any attack on the consensus layer becomes more effective once the node's peer-to-peer protocol is compromised. A 51 percent attack without the attacker owning the majority of the computing power, or a double-spend attack even after several block confirmations, both become possible under an eclipse attack.

An attacker can double spend a transaction even after n confirmations simply by eclipsing a fraction of the miners along with the victim node. The attacker spends a fund and forwards the transaction to an eclipsed miner. When the miner includes it in a block, the attacker shows this blockchain to the victim node, and the victim is convinced after seeing the confirmed transaction. The attacker also forwards a second transaction that double spends the same fund. Once the attacker completes their purchase from the victim, they reveal the actual blockchain to both the eclipsed miner and the victim, making the eclipsed blockchain obsolete.

Figure 3 shows such a double-spend attack. The attacker eclipses the victim and a miner who controls 30 percent of the mining power. The attacker spends a fund and sends the transaction to the eclipsed miner; the eclipsed victim only sees this version of the blockchain. The attacker then spends the same fund in a transaction that is seen by the rest of the network. Since the rest of the network controls the majority (70 percent) of the mining power, it creates a longer blockchain, making the eclipsed miner's blockchain obsolete:

Figure 3: Double-spend attack by eclipsing the victim node

If the attacker is a miner, they can launch a 51 percent attack without owning 51 percent of the network's computing power, by preventing the honest miners from acting as a majority. The attacker can eclipse a few miners from the rest of the network, which stops those miners from building on each other's blocks and prevents the honest miners from jointly controlling the majority of the block-creation power. This increases the chances of an attacker with less than 51 percent of the mining power successfully launching a 51 percent attack. Figure 4 shows an attacker with 40 percent of the mining power eclipsing two miners, each controlling 30 percent of the mining power. Now that the attacker effectively holds the largest share of the mining power, they have a better chance of ending up with a longer chain than the other miners, who are isolated from each other. Each eclipsed miner, unaware of the rest of the network, keeps building its own version of the blockchain, and the attacker can publish their blockchain to the network at any time, making the other versions obsolete.

Figure 4: 51 percent attack with less than 50 percent of the mining power

Although the eclipse attack may seem unrealistic, it isn't. A clever attack with the help of botnets can easily compromise a node that doesn't implement an additional layer of network security. The paper Eclipse Attacks on Bitcoin's Peer-to-Peer Network works through the chances of an eclipse attack succeeding in different scenarios. An experiment performed with botnets produced the following results:

A worst-case scenario was created by filling the tried bucket slots with addresses of honest nodes. The attack used a total of 4,600 IP addresses over a period of 5 hours. Although the tried bucket slots initially held mostly honest addresses, 98.8 percent of them were replaced with the attacker's addresses after the attack, and the attack had a 100 percent success rate.

An attack was also performed on live Bitcoin nodes that had only 7 percent of their tried slots filled with legitimate addresses, using 400 IP addresses and only 1 hour of attack time. The tried table ended up with around 57 percent attacker addresses, and the attack had a success rate of 84 percent.
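Those success rates are easier to believe with a rough model in hand. The toy Monte Carlo below (an illustrative sketch, not the paper's methodology) asks one question: if a given fraction of the victim's tried table holds attacker addresses at reboot, how often do all 8 outgoing connections land on attacker nodes? The real attack does considerably better than this naive model, because address selection is biased toward recent timestamps and the victim keeps retrying until its connections succeed, as the next section explains:

```python
# Naive model: each of the 8 outgoing connections independently picks an
# attacker address with probability equal to the attacker's share of the
# tried table. Real Bitcoin selection is timestamp-biased, favouring the
# attacker's freshly announced addresses even more.
import random

def eclipse_probability(attacker_fraction: float, trials: int = 100_000) -> float:
    eclipsed = 0
    for _ in range(trials):
        if all(random.random() < attacker_fraction for _ in range(8)):
            eclipsed += 1
    return eclipsed / trials

for f in (0.57, 0.90, 0.988):
    print(f"{f:.1%} attacker addresses -> eclipse rate {eclipse_probability(f):.1%}")
```

Even this crude model shows how steeply the odds swing once the attacker dominates the table, which is why the paper's 98.8 percent takeover translated into a 100 percent attack success rate.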
Vulnerabilities and countermeasures

The attacker has to exploit a few vulnerabilities to replace legitimate peer addresses with their own. Some of the vulnerabilities in Bitcoin nodes that can be exploited are as follows:

The node prefers IP addresses from the tried bucket with recent timestamps, which increases the probability of an attacker's address being selected even if the attacker owns only a small portion of the tried bucket addresses. The attacker can improve the odds further by increasing the attack time.

Whenever an address bucket is full, one of its addresses is evicted at random. Since the evicted address is chosen at random, an attacker whose IP is removed from the bucket can eventually get it re-inserted by repeatedly sending it to the node.

These vulnerabilities can be mitigated by altering how the Bitcoin node behaves while gossiping with peers:

Selection of IP addresses from the tried table could be randomized, which would reduce the chance of selecting an attacker's peer even if it connected recently. With randomized peer selection, the attacker gains nothing by investing more time in the attack.

Using a deterministic approach to insert each peer address into a fixed slot reduces the chance of the attacker's address landing in a different slot after it is evicted from a bucket. Deterministic insertion ensures that repeatedly re-sending an address adds no value to an attack.

Most of these vulnerabilities in Bitcoin have been fixed, but given public blockchain networks and the open source culture followed by most blockchain-based organizations, attackers will keep finding new vulnerabilities quickly.

In this article, we discussed the theory behind eclipse attacks, the various ways they can compromise a Bitcoin network, and the vulnerabilities and countermeasures that mitigate them.

Learn basic blockchain concepts and algorithms in Python from our latest book, Foundations of Blockchain, written by Koshik Raj. Koshik Raj is an information security enthusiast who holds a master's degree in computer science and information security. He has a background of working with RSA, a network security company, and has also worked as a senior developer at CoWrks, Bengaluru.

Understanding the cost of a cybersecurity attack: The losses organizations face
Knowing the threat actors behind a cyber attack
200+ Bitcoins stolen from Electrum wallet in an ongoing phishing attack


Stack Overflow survey data further confirms Python's popularity as it moves above Java in the most used programming language list

Richard Gall
10 Apr 2019
5 min read
This year's Stack Overflow Developer Survey results provide a useful insight into how the programming language ecosystem is evolving. Perhaps the most remarkable - if unsurprising - insight is the continued and irresistible rise of Python. This year, for the first time, it finished higher in the rankings than Java. We probably don't need another sign that Python is taking over the world, but this is certainly another one to add to the collection.

What we already know about Python's popularity as a programming language

Okay, so the Stack Overflow survey results weren't that surprising, because Python's growth is well-documented. The language has been shooting up the TIOBE rankings, coming third for the first time back in September 2018. The most recent ranking has seen it slip to fourth (C++ is making a resurgence - but that's a story for another time...), but it isn't in decline - it's still growing. In fact, despite moving back into fourth, it's still growing at the fastest rate of any programming language, with 2.36% growth in its rating; for comparison, C++'s rate of growth in the rankings is 1.62%. But it's not just about TIOBE rankings. Even back in September 2017, the Stack Overflow team were well aware of Python's particularly astonishing growth in high-income countries.

Read next: 8 programming languages to learn in 2019

Python's growth in the Stack Overflow survey since 2013

It is easy to trace the growth in the use of Python through the results of every recent Stack Overflow survey. From 2016, it has consistently been on the up:

2013: 21.9% (6th position in the rankings)
2014: 23.4% (again, 6th position)
2015: 23.8% (6th)
2016: 24.9% (6th)
2017: 32% (moving up to 5th...)
2018: 38.8% (down to 7th, but with a big percentage increase)
2019: 41.7% (4th position)

More interestingly, this growth in usage seems to be driving demand for the language. Let's take a look at how things have changed in the 'most wanted' programming language rankings since 2015 - this is the "percentage of developers who are not developing with the language or technology but have expressed interest in developing with it":

2015: 14.8% (3rd)
2016: 13.3% (4th)
2017: 20.6% (1st)
2018: 25.1% (1st)
2019: 25.7% (1st)

Alongside that, it's also worth considering just how well-loved Python is. A big part of this is probably the fact that Python is so effective for the people using it, and helps them solve the problems they want to solve. These percentages are growing too, even though Python didn't take the top spot this year (this is described by Stack Overflow as the "percentage of developers who are developing with the language or technology and have expressed interest in continuing to develop with it"):

2015: 66.6% (10th position)
2016: 62.5% (9th)
2017: 62.7% (6th)
2018: 68% (3rd)
2019: 73.1% (2nd, this time pipped to the top spot by Rust)

What's clear here is that Python has a really strong foothold both in developer mindshare (i.e. developers believe it's worth learning) and in literal language use. It's highly likely that the two are related - but whatever the reality, it's good to see the process playing out in data from the last half a decade.

Read next: 5 blog posts that could make you a better Python programmer

What's driving the popularity of Python?

The obvious question, then, is why Python is growing so quickly. There are plenty of theories out there, and there are certainly plenty of blog posts on the topic.
But ultimately, Python's popularity boils down to a few key things.

Python is a flexible language

One of the key reasons for Python's growth is its flexibility: it isn't confined to a specific domain. That goes some way toward explaining its growth - because it isn't limited to a specific job role or task, a huge range of developers are finding uses for it. This has a knock-on effect: as the community of users continues to grow, there is much more impetus behind developing tools that support and facilitate the use of Python in diverse domains. Indeed, with the exception of JavaScript, Python is a language that many developers experience primarily through its huge range of related tools and libraries.

The growth of data science and machine learning

While Python isn't limited to a specific domain, the immense rise in interest in machine learning and data analytics has been integral to Python's popularity. With so much data available to organizations and their employees, Python is a language that allows them to actually leverage it.

Read next: Why is Python so good for AI and Machine Learning? 5 Python Experts Explain

Python's easy to learn

The final key driver of Python's growth is that it is relatively easy to learn - it's actually a pretty good place to begin if you're new to programming. Going back to the first point, it's precisely because Python is flexible that people who might not typically write code, or see themselves as developers, can treat it as a neat solution to a problem they're trying to solve. Because the learning curve isn't particularly steep, it introduces these people to the foundational elements of programming - which can only be a good thing, right?

The future of Python

It's easy to get excited about Python's growth, but what's particularly intriguing is what it might indicate about the wider software landscape. That's perhaps a whole new question, but from a burgeoning army of non-developer professionals powered by Python to every engineer wanting to unlock automation, the growth of Python looks like both a response to, and a symptom of, significant changes.
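For anyone who wants to eyeball the trend themselves, the usage figures quoted above plot in a few lines. This is a minimal matplotlib sketch using only the numbers cited in this piece:

```python
# Python's usage share in the Stack Overflow Developer Survey, as quoted above.
import matplotlib.pyplot as plt

years = [2013, 2014, 2015, 2016, 2017, 2018, 2019]
python_usage = [21.9, 23.4, 23.8, 24.9, 32.0, 38.8, 41.7]  # % of respondents

plt.plot(years, python_usage, marker="o")
plt.title("Python usage in the Stack Overflow Developer Survey")
plt.xlabel("Survey year")
plt.ylabel("% of respondents using Python")
plt.show()
```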


Google Cloud Next’19 day 1: open-source partnerships, hybrid-cloud platform, Cloud Run, and more

Bhagyashree R
10 Apr 2019
6 min read
Google Cloud Next '19 kicked off yesterday in San Francisco. On day 1 of the event, Google showcased its new tools for application developers, announced partnerships with open-source companies, and outlined its strategy to make a mark in the cloud industry, which is currently dominated by Amazon and Microsoft. Here's a rundown of the announcements Google made yesterday.

Google Cloud's new CEO is set to expand its sales team

Cloud Next '19 is the first event at which the newly appointed Google Cloud CEO, Thomas Kurian, took the stage to share his plans for Google Cloud. He plans to make Google Cloud "the best strategic partner" for organizations modernizing their IT infrastructure. To step up its game in the cloud industry, Google needs to focus more on understanding its customers, providing them better support, and making it easier for them to conduct business; this is why Kurian is planning to expand the sales team and add more technical specialists. Kurian, who joined Google after working at Oracle for 22 years, also shared that the team is rolling out new contracts to make contracting easier, and promised simplified pricing.

Anthos, Google's hybrid cloud platform, is coming to AWS and Azure

During the opening keynote, Google CEO Sundar Pichai confirmed the rebranding of Cloud Services Platform, Google's platform for building and managing hybrid applications, as it enters general availability. The rebranded version, named Anthos, provides customers a single managed service that is not limited to Google-based environments and comes with extended support for Amazon Web Services (AWS) and Azure. With this extended support, Google aims to give organizations with a multi-cloud sourcing strategy a more consistent experience across all three clouds.

Urs Hölzle, Google's Senior Vice President for Technical Infrastructure, shared in a press conference, "I can't really stress how big a change that is in the industry, because this is really the stack for the next 20 years, meaning that it's not really about the three different clouds that are all randomly different in small ways. This is the way that makes these three clouds — and actually on-premise environments, too — look the same."

Another plus point of Anthos is that it is hardware agnostic, meaning customers can run the service on top of their current hardware without having to immediately invest in new servers. It is a subscription-based service, with prices starting at $10,000/month per 100 vCPU block. Google also announced the first beta release of Anthos Migrate, a service that auto-migrates VMs from on-premises or other clouds directly into containers in Google Kubernetes Engine (GKE) with minimal effort. Explaining the advantage of this tool, Google wrote in a blog post, "Through this transformation, your IT team is free from managing infrastructure tasks like VM maintenance and OS patching, so it can focus on managing and developing applications."

Google Cloud partners with top open-source projects challenging AWS

Google has partnered with several top open-source data management and analytics companies, including Confluent, DataStax, Elastic, InfluxData, MongoDB, Neo4j and Redis Labs. The services and products provided by these companies will be deeply integrated into the Google Cloud Platform. With this integration, Google aims to provide customers a seamless experience by letting them use these open source technologies in a single place, Google Cloud.
These will be managed services, with invoicing and billing handled by Google Cloud. Customer support will also be Google's responsibility, so users can manage and log tickets across all of these services via a single platform. Google's approach of partnering with these open source companies is quite different from that of other cloud providers. Over the past few years, we have seen cases where cloud providers sell open-source projects as a service, often without giving any credit to the original project. This led companies to revisit their open-source licenses to stop such behavior: Redis adopted the Commons Clause license for its Redis Modules and later dropped the revised license in February, and MongoDB, Neo4j, and Confluent embraced similar strategies. Kurian said, "In order to sustain the company behind the open-source technology, they need a monetization vehicle. If the cloud provider attacks them and takes that away, then they are not viable and it deteriorates the open-source community."

Cloud Run for running stateless containers serverlessly

Google has combined serverless computing and containerization into a single product called Cloud Run. Yesterday, Oren Teich, Director of Product Management for Serverless, announced the beta release of Cloud Run and explained how it works. Cloud Run is a managed compute platform for running stateless containers that are invoked via HTTP requests. It is built on top of Knative, a Kubernetes-based platform for building, deploying, and managing serverless workloads. You get two options to choose from: run your containers fully managed with Cloud Run, or in your own Google Kubernetes Engine cluster with Cloud Run on GKE. Announcing the release, Teich wrote in a blog post, "Cloud Run is introducing a brand new product that takes Docker containers and instantly gives you a URL. This is completely unique in the industry. We're taking care of everything from the top end of SSL provisioning and routing, all the way down to actually running the container for you. You pay only by the hundred milliseconds of what you need to use, and it's end-to-end managed."

Google releases closed source VS Code plugin

Google announced the beta release of Cloud Code for VS Code as a closed source extension. It extends VS Code to bring the convenience of IDEs to developing cloud-native Kubernetes applications, with the aim of speeding up build, deployment, and debugging cycles. You can deploy your applications to local clusters or across multiple cloud providers. Under the hood, Cloud Code for VS Code uses Google's popular command-line tools, such as skaffold and kubectl, to give users continuous feedback as they build their projects. It also supports deployment profiles that let you define different environments, making testing and debugging easier on your workstation or in the cloud.

Cloud SQL now supports PostgreSQL 11.1 Beta

Cloud SQL is Google's fully managed database service that makes it easier to set up, maintain, manage, and administer relational databases on GCP. It now comes with support for PostgreSQL 11.1 Beta, alongside the relational databases it already supports: MySQL 5.5, 5.6, and 5.7, and PostgreSQL 9.6.

Google's Cloud Healthcare API is now available in beta
Ian Goodfellow quits Google and joins Apple as a director of machine learning
Google Podcasts is transcribing full podcast episodes for improving search results
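To ground the Cloud Run description above: a Cloud Run service is simply a stateless container that serves HTTP on the port the platform hands it via the PORT environment variable. The following is a minimal sketch of such a service, not Google's sample code; Flask is our assumption for illustration, and any HTTP server that honours PORT would work:

```python
# Minimal sketch of a stateless HTTP service of the kind Cloud Run expects:
# it binds to the port supplied in the PORT environment variable and keeps
# no local state between requests. (Illustrative only; Flask is an assumed
# choice, and deployment details live in Google's Cloud Run documentation.)
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from a stateless container!"

if __name__ == "__main__":
    # Cloud Run injects the port to bind via the PORT environment variable.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```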


Online Safety vs Free Speech: UK’s "Online Harms" white paper divides the internet and puts tech companies in government crosshairs

Fatema Patrawala
10 Apr 2019
10 min read
The internet is an integral part of everyday life for so many people, and it has added a new dimension to the spaces of imagination in which we all live. But the problems of the offline world have moved there, too. As the internet continues to grow and transform our lives, often for the better, we should not ignore the very real harms which people face online every day, and lawmakers around the world are taking decisive action to make people safer online.

On Monday, the EU drafted a regulation on preventing the dissemination of terrorist content online. Last week, the Australian parliament passed legislation to crack down on violent videos on social media. Recently, Sen. Elizabeth Warren, a US 2020 presidential hopeful, proposed stronger antitrust laws to break up big tech companies like Amazon, Google, Facebook and Apple; on 3rd April, she introduced the Corporate Executive Accountability Act, a new piece of legislation that would make it easier to criminally charge company executives when Americans' personal data is breached. Last year, the German parliament enacted the NetzDG law, requiring large social media sites to remove posts that violate certain provisions of the German code, including broad prohibitions on "defamation of religion," "hate speech," and "insult."

And here is yet another tech regulation announcement: on Monday, the UK government published a white paper on online harms. The Department for Digital, Culture, Media and Sport (DCMS) has proposed an independent watchdog that will write a "code of practice" for tech companies. According to Jeremy Wright, Secretary of State for Digital, Culture, Media & Sport, and Sajid Javid, Home Secretary, "nearly nine in ten UK adults and 99% of 12 to 15 year olds are online. Two thirds of adults in the UK are concerned about content online, and close to half say they have seen hateful content in the past year. The tragic recent events in New Zealand show just how quickly horrific terrorist and extremist content can spread online." They further emphasized that such harmful behaviours and content must not be allowed to undermine the significant benefits the digital revolution can offer.

The white paper therefore puts forward ambitious plans for a new system of accountability and oversight for tech companies, moving far beyond self-regulation. It includes a new regulatory framework for online safety which will clarify companies' responsibilities to keep UK users safer online, with the most robust action to counter illegal content and activity. The paper suggests three major steps for tech regulation: establishing an independent regulator that can write a "code of practice" for social networks and internet companies; giving the regulator enforcement powers, including the ability to fine companies that break the rules; and considering additional enforcement powers, such as the ability to fine company executives and force internet service providers to block sites that break the rules.

Outlining the proposals, Culture Secretary Jeremy Wright discussed the potential fines with the BBC: "If you look at the fines available to the Information Commissioner around the GDPR rules, that could be up to 4% of company's turnover... we think we should be looking at something comparable here."

What kinds of 'online harms' does the paper cite?

The paper covers a range of issues that are clearly defined in law, such as the spreading of terrorist content, child sex abuse, so-called revenge pornography, hate crimes, harassment and the sale of illegal goods.
It also covers harmful behaviour with a less clear legal definition, such as cyber-bullying, trolling and the spread of fake news and disinformation.

The paper cites that in 2018, US tech companies referred over 18.4 million items of child sexual exploitation and abuse (CSEA) material to the National Center for Missing and Exploited Children (NCMEC). Of those, 113,948 were UK-related referrals, up from 82,109 in 2017. In the third quarter of 2018, Facebook reported removing 8.7 million pieces of content globally for breaching policies on child nudity and sexual exploitation.

Another type of online harm occurs when terrorists use online services to spread their vile propaganda and mobilise support. The paper emphasizes that terrorist content online threatens the UK's national security and the safety of the public: all five terrorist attacks in the UK during 2017 had an online element, and online terrorist content remains a feature of contemporary radicalisation. It is seen across terrorist investigations, including cases where suspects became very quickly radicalised to the point of planning attacks, partly as a result of the continued availability and deliberately attractive format of the terrorist material they access online.

The paper further suggests that social networks must tackle material that advocates self-harm and suicide, which became a prominent issue after 14-year-old Molly Russell took her own life in 2017. After she died, her family found distressing material about depression and suicide on her Instagram account, and Molly's father Ian Russell holds the social media giant partly responsible for her death. Home Secretary Sajid Javid said tech giants and social media companies had a moral duty "to protect the young people they profit from": "Despite our repeated calls to action, harmful and illegal content - including child abuse and terrorism - is still too readily available online."

What does the new proposal suggest to tackle online harm?

The paper calls for an independent regulator to hold internet companies to account, though it does not specify whether a new body will be established or an existing one handed new powers. The regulator will define a "code of best practice" that social networks and internet companies must adhere to. The rules would apply not only to tech companies like Facebook, Twitter and Google, but also to messaging services such as WhatsApp and Snapchat, and to cloud storage services. The regulator will have the power to fine companies and publish notices naming and shaming those that break the rules. The paper also floats fines for individual company executives, making search engines remove links to offending websites, and consulting on blocking harmful websites outright.

Another area discussed in the paper is developing a culture of transparency, trust and accountability as a critical element of the new regulatory framework. The regulator will have the power to require annual transparency reports from companies in scope, outlining the prevalence of harmful content on their platforms and the measures they are taking to address it. These reports will be published online by the regulator, so that users can make informed decisions about their online use. Additionally, the paper suggests the spread of fake news could be tackled by forcing social networks to employ fact-checkers and promote legitimate news sources.
How the paper plans to deploy technology as part of the solution

The paper says companies should invest in the development of safety technologies to reduce the burden on users to stay safe online. In November 2018, the UK Home Secretary co-hosted a hackathon with five major technology companies to develop a new tool to identify online grooming; the paper proposes that this tool be licensed for free to other companies, with more such innovative and collaborative efforts planned.

The government also plans to work with industry and civil society to develop a safety-by-design framework, linking up with existing legal obligations around data protection by design and secure-by-design principles. This will make it easier for startups and small businesses to embed safety during the development or update of products and services. It also plans to explore how AI can best be used to detect, measure and counter online harms, while ensuring its deployment remains safe and ethical. A new project led by the Alan Turing Institute sets out to address this issue: the 'Hate Speech: Measures and Counter-measures' project will use a mix of natural language processing techniques and qualitative analyses to create tools which identify and categorize different strengths and types of online hate speech. Other plans include launching online safety apps that combine state-of-the-art machine-learning technology to track children's activity on their smartphones with the ability for children to self-report their emotional state.

Why is the white paper receiving critical comments?

Though the paper seems a welcome step towards sane internet regulation and looks sensible at first glance, it has in some quarters been judged both too ambitious and unrealistically feeble, reflecting the conflicting political pressures under which it was generated. TechUK, an umbrella group representing the UK's technology industry, said the government must be "clear about how trade-offs are balanced between harm prevention and fundamental rights". Jim Killock, executive director of Open Rights Group, said the government's proposals would "create state regulation of the speech of millions of British citizens". Matthew Lesh, head of research at the free market think tank the Adam Smith Institute, went further: "The government should be ashamed of themselves for leading the western world in internet censorship. The proposals are a historic attack on freedom of speech and the free press. At a time when Britain is criticising violations of freedom of expression in states like Iran, China and Russia, we should not be undermining our freedom at home."

No one doubts the harm done by child sexual abuse or terrorist propaganda online, but these things are already illegal. The difficulty is enforcement, which the white paper does nothing to address. Effective enforcement would demand a great deal of money and human time. The present system relies on a mixture of human reporting and algorithms, and the algorithms can be fooled without too much trouble: 300,000 of the 1.5m copies of the Christchurch terrorist videos uploaded to Facebook within 24 hours of the crime went undetected by automated systems.

Beyond this, there is criticism of the white paper's vision, which calls for "A free, open and secure internet with freedom of expression online" "where companies take effective steps to keep their users safe".
Yet the paper does not explain how it is going to protect free expression, which sits in apparent contradiction with the proposed regulation.

https://twitter.com/jimkillock/status/1115253155007205377

Beyond this, there is a conceptual problem. Much of the harm done on and by social media does not come from deliberate criminality, but from ordinary people released from the constraints of civility. It is here that the white paper fails most seriously. It talks about material - such as "intimidation, disinformation, the advocacy of self-harm" - that is harmful but not illegal, yet proposes to regulate it in the same way as material that is both. Even leaving aside politically motivated disinformation, this is an area where much deeper and clearer thought is needed.

https://twitter.com/guy_herbert/status/1115180765128667137

There is no doubt that some forms of disinformation do serious harm, both to individuals and to society as a whole. Regulating the internet is necessary, but it won't be easy or cheap, and too much of this white paper looks like an attempt to find cheap and easy solutions to really hard questions.

Tech companies in EU to face strict regulation on Terrorist content: One hour take down limit; Upload filters and private Terms of Service
Tech regulation to an extent of sentence jail: Australia's 'Sharing of Abhorrent Violent Material Bill' to Warren's 'Corporate Executive Accountability Act'
How social media enabled and amplified the Christchurch terrorist attack
2019 Stack Overflow survey: A quick overview

Sugandha Lahoti
10 Apr 2019
5 min read
The results of the 2019 Stack Overflow survey have just been published: 90,000 developers took the 20-minute survey this year. The survey shed light on some very interesting insights - from developers' preferred programming languages, to the development platforms they dread the most, to the blockers to developer productivity. As the survey is quite detailed and comprehensive, here's a quick look at the most important takeaways.

Key highlights from the Stack Overflow survey

Programming languages

Python again emerged as the fastest-growing major programming language and was the second most loved language, close behind Rust. Interestingly, Python and TypeScript received almost identical scores, with about 73% of respondents marking each as a loved language. Python was the language most developers wanted to learn next, and JavaScript remains the most used programming language. The most dreaded languages were VBA and Objective-C.

Source: Stack Overflow

Frameworks and databases in the Stack Overflow survey

Developers preferred the React.js and Vue.js web frameworks, while Drupal and jQuery were the most dreaded. Redis was voted the most loved database and MongoDB the most wanted database. MongoDB's inclusion in the list is surprising considering its controversial Server Side Public License. Over the last few months, Red Hat dropped support for MongoDB over this license, as did GNU Health Federation. Both of these organizations chose PostgreSQL over MongoDB, which is probably one of the reasons PostgreSQL was the second most loved and wanted database of the Stack Overflow survey 2019.

Source: Stack Overflow

It's interesting to see WebAssembly making its way into the popular technology segment as well as being one of the top-paying technologies. Respondents who use Clojure, F#, Elixir, and Rust earned the highest salaries. Stack Overflow also added a new segment this year called "Blockchain in the real world", which gives insight into the adoption of blockchain. Most respondents (80%) said that their organizations are not using or implementing blockchain technology.

Source: Stack Overflow

Developer lifestyles and learning

About 80% of respondents say that they code as a hobby outside of work, and over half of respondents had written their first line of code by the time they were sixteen, although this experience varies by country and by gender. For instance, women wrote their first code later than men, and non-binary respondents wrote code earlier than men. About one-quarter of respondents are enrolled in a formal college or university program full-time or part-time. Of professional developers who studied at the university level, over 60% said they majored in computer science, computer engineering, or software engineering.

DevOps specialists and site reliability engineers are among the highest-paid and most experienced developers, are among the most satisfied with their jobs, and are looking for new jobs at the lowest rates. The survey also noted that developers who are system admins or DevOps specialists are 25-30 times more likely to be men than women. Chinese developers are the most optimistic about the future, while developers in Western European countries like France and Germany are among the least optimistic.

Developers also overwhelmingly believe that Elon Musk will be the most influential person in tech in 2019. With more than 30,000 people responding to a free-text question asking who they think will be the most influential person this year, an amazing 30% named Tesla CEO Musk.
For perspective, Jeff Bezos was in second place, being named by 'only' 7.2% of respondents.

Although the proportion of women among US survey respondents went up from 9% to 11% this year, that is still slow growth, and it points to problems with inclusion in the tech industry in general and on Stack Overflow in particular. When thinking about blockers to productivity, different kinds of developers report different challenges. Men are more likely to say that being tasked with non-development work is a problem for them, while gender-minority respondents are more likely to say that toxic work environments are a problem.

Stack Overflow survey demographics and diversity challenges

This report is based on a survey of 88,883 software developers from 179 countries around the world. It was conducted between January 23 and February 14, and the median time spent on the survey for qualified responses was 23.3 minutes. The majority of survey respondents this year were people who said they are professional developers or who code sometimes as part of their work, or are students preparing for such a career. The majority of them were from the US, India, China and Europe.

Stack Overflow acknowledged that their results did not represent racial disparities evenly and that people of color continue to be underrepresented among developers. This year nearly 71% of respondents continued to be of White or European descent, a slight improvement from last year (74%). The survey notes: "In the United States this year, 22% of respondents are people of color; last year 19% of United States respondents were people of color." This clearly signifies that a lot of work still needs to be done, particularly for people of color, women, and underrepresented groups. Last August, Stack Overflow revamped its Code of Conduct to include more virtues around kindness, collaboration, and mutual respect, and it updated its developer salary calculator to include 8 new countries.

Go through the full report to learn more about developer salaries, job priorities, career values, the best music to listen to while coding, and more.

Developers believe Elon Musk will be the most influential person in tech in 2019, according to Stack Overflow survey results
Creators of Python, Java, C#, and Perl discuss the evolution and future of programming language design at PuPPy
Stack Overflow is looking for a new CEO as Joel Spolsky becomes Chairman

How to create observables in RxJS [Tutorial]

Sugandha Lahoti
10 Apr 2019
7 min read
Reactive programming requires us to change the way that we think about events in an application: it asks us to think about events as a stream of values. For example, a mouse click event can be represented as a stream of data, with every click generating a new value in the stream. In reactive programming, we can use the stream of data to query and manipulate the values in it.

Observables are streams of data, and this explains why it is easy to imagine that we can represent an event such as an onClick event using an observable. However, the use cases for observables are much more diverse than that. In this article, we are going to explore how to create an observable from different types of sources. This article is taken from the book Hands-On Functional Programming with TypeScript by Remo H. Jansen. In this book, you will discover the power of functional programming, lazy evaluation, monads, concurrency, and immutability to create succinct and expressive implementations.

Creating observables from a value

We can create an observable from a value using the of function. In old versions of RxJS, of was a static method of the Observable class, available as Observable.of. This should remind us of the of method of the Applicative type in category theory, from which observables take some inspiration. In RxJS 6.0, however, the of method is available as a standalone factory function:

```typescript
import { of } from "rxjs";

const observable = of(1);

const subscription = observable.subscribe(
    (value) => console.log(value),
    (error: any) => console.log(error),
    () => console.log("Done!")
);

subscription.unsubscribe();
```

The preceding code snippet declares an observable with one unique value using the of function. It also showcases how we can subscribe to an observable using the subscribe method, which takes three function arguments:

- Item handler: invoked once for each item in the sequence.
- Error handler: invoked if there is an error in the sequence. This argument is optional.
- Done handler: invoked when there are no more items in the sequence. This argument is optional.

The following diagram is known as a marble diagram and is used to represent observables in a visual manner. The arrow represents time and the circles are values. In this case, we have only one value. The circle also has a small vertical line in the middle, which is used to represent the last element in an observable. In this case, the item handler in the subscription will only be invoked once.

Creating observables from arrays

We can create an observable from an existing array using the from function:

```typescript
import { from } from "rxjs";

const observable = from([10, 20, 30]);

const subscription = observable.subscribe(
    (value) => console.log(value),
    (error: any) => console.log(error),
    () => console.log("Done!")
);

subscription.unsubscribe();
```

The preceding code snippet declares an observable with three values using the from function and subscribes to it once more. The following marble diagram represents the preceding example in a visual manner.
The generated observable has three values (10, 20, and 30), with 30 as the last element in the observable.

We can alternatively use the interval function to generate a sequence of values emitted over time:

```typescript
import { interval } from "rxjs";

const observable = interval(10);

const subscription = observable.subscribe(
    (value) => console.log(value),
    (error: any) => console.log(error),
    () => console.log("Done!")
);

subscription.unsubscribe();
```

The preceding code snippet declares an observable that emits an ascending sequence of integers (0, 1, 2, ...), one every ten milliseconds, until we unsubscribe; note that interval does not, on its own, produce a fixed number of elements. The marble diagram for this example shows ten values, with 9 as the last item. To have the item handler invoked exactly ten times, as the diagram suggests, the stream needs to be capped, as shown in the sketch below.
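Here is a minimal sketch of that capping, assuming the take operator from rxjs/operators (which ships with RxJS 6):

```typescript
import { interval } from "rxjs";
import { take } from "rxjs/operators";

// interval(10) emits 0, 1, 2, ... every 10 ms until unsubscribed;
// take(10) completes the stream after the first ten values (0..9).
const observable = interval(10).pipe(take(10));

const subscription = observable.subscribe(
    (value) => console.log(value),       // invoked exactly ten times
    (error: any) => console.log(error),
    () => console.log("Done!")           // invoked after the tenth value
);

// Note: we deliberately do not call subscription.unsubscribe() right away;
// interval emits asynchronously, so an immediate unsubscribe would cancel
// the stream before any value is delivered.
```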
Creating observables from events

It is also possible to create an observable using an event as the source of the items in the stream. We can do this using the fromEvent function:

```typescript
import { fromEvent } from "rxjs";

const observable = fromEvent(document, "click");

const subscription = observable.subscribe(
    (value) => console.log(value)
);

subscription.unsubscribe();
```

In this case, the item handler in the subscription will be invoked as many times as the click event takes place. Please note that the preceding example can only be executed in a web browser, and to run it there you will need to use a module bundler, such as Webpack.

Creating observables from callbacks

It is also possible to create an observable that will iterate the arguments of a callback, using the bindCallback function:

```typescript
import { bindCallback } from "rxjs";
import fetch from "node-fetch";

function getJSON(url: string, cb: (response: unknown | null) => void) {
    fetch(url)
        .then(response => response.json())
        .then(json => cb(json))
        .catch(_ => cb(null));
}

const uri = "https://jsonplaceholder.typicode.com/todos/1";
const observableFactory = bindCallback(getJSON);
const observable = observableFactory(uri);
const subscription = observable.subscribe(
    (value) => console.log(value)
);
subscription.unsubscribe();
```

The preceding example uses the node-fetch module because the fetch function is not available in Node.js. You can install the node-fetch module using the following npm command:

npm install node-fetch @types/node-fetch

The getJSON function takes a URL and a callback as its arguments. When we pass it to the bindCallback function, a new function is returned. The new function takes a URL as its only argument and returns an observable instead of taking a callback.

In Node.js, callbacks follow a well-defined pattern: they take two arguments, error and result, and don't throw exceptions. We must use the error argument to check whether something went wrong, instead of a try/catch statement. RxJS also defines a function named bindNodeCallback that allows us to work with these Node.js-style callbacks:

```typescript
import { bindNodeCallback } from "rxjs";
import * as fs from "fs";

const observableFactory = bindNodeCallback(fs.readFile);
const observable = observableFactory("./roadNames.txt");
const subscription = observable.subscribe(
    (value) => console.log(value.toString())
);
subscription.unsubscribe();
```

The helpers bindCallback and bindNodeCallback have very similar behavior, but the second has been specially designed to work with Node.js callbacks.

Creating observables from promises

Another potential source of items for an observable sequence is a Promise. RxJS also allows us to handle this use case with the from function: we must pass a Promise instance to the from function. In the following example, we use the fetch function to send an HTTP request; the promise it returns is passed to the from function. Because response.json() is itself asynchronous, we flatten it with the mergeMap operator rather than map:

```typescript
import { from } from "rxjs";
import { mergeMap } from "rxjs/operators";
import fetch from "node-fetch";

const uri = "https://jsonplaceholder.typicode.com/todos/1";

// fetch(uri) resolves to a Response; mergeMap unwraps the inner
// promise returned by response.json() into the output stream.
const observable = from(fetch(uri)).pipe(mergeMap(response => response.json()));

const subscription = observable.subscribe(
    (value) => console.log(value)
);

subscription.unsubscribe();
```

The generated observable will contain the result of the promise as its only item.

Cold and hot observables

The official RxJS documentation explores the differences between cold and hot observables as follows:

"Cold observables start running upon subscription, that is, the observable sequence only starts pushing values to the observers when Subscribe is called. Values are also not shared among subscribers. This is different from hot observables, such as mouse move events or stock tickers, which are already producing values even before a subscription is active. When an observer subscribes to a hot observable sequence, it will get all values in the stream that are emitted after it subscribes. The hot observable sequence is shared among all subscribers, and each subscriber is pushed the next value in the sequence."

It is important to understand these differences if we want to have control over the execution flow of our components. The key point to remember is that cold observables are lazily evaluated; a short sketch contrasting the two behaviors follows at the end of this article.

In this article, we learned what observables are and how we can create them and work with them. To know more about working with observables, and other aspects of functional programming, read the book Hands-On Functional Programming with TypeScript.

What makes functional programming a viable choice for artificial intelligence projects?
Why functional programming in Python matters: Interview with best selling author, Steven Lott
Introducing Coconut for making functional programming in Python simpler
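As promised in the cold and hot observables section above, here is a minimal illustrative sketch (not from the book) that uses the share operator from rxjs/operators to turn a cold observable into a hot, multicast one:

```typescript
import { interval } from "rxjs";
import { share, take } from "rxjs/operators";

// Cold: each subscription triggers its own independent execution.
const cold = interval(100).pipe(take(3));
cold.subscribe(v => console.log(`cold A: ${v}`)); // A: 0, 1, 2
cold.subscribe(v => console.log(`cold B: ${v}`)); // B: 0, 1, 2 (its own run)

// Hot: one shared execution; late subscribers miss earlier values.
const hot = interval(100).pipe(take(5), share());
hot.subscribe(v => console.log(`hot A: ${v}`));   // A: 0, 1, 2, 3, 4
setTimeout(() => {
    hot.subscribe(v => console.log(`hot B: ${v}`)); // B: roughly 2, 3, 4
}, 250);
```

Running the sketch shows the cold observable restarting from 0 for every subscriber, while the shared (hot) observable delivers only the values emitted after each subscription.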

Developers believe Elon Musk will be the most influential person in tech in 2019, according to Stack Overflow survey results

Richard Gall
09 Apr 2019
4 min read
According to the results of this year's Stack Overflow survey - published today - developers overwhelmingly believe that Elon Musk will be the most influential person in tech in 2019. With more than 30,000 people responding to a free-text question asking who they think will be the most influential person this year, an amazing 30% named Tesla CEO Musk. For perspective, Jeff Bezos was in second place, named by 'only' 7.2% of respondents, and Microsoft boss Satya Nadella was third, named by 4.4% of respondents.

Why does everyone think Elon Musk is going to be so influential in tech in 2019?

From one viewpoint, the fact that so many developers would put Musk at the top of the list of 2019's tech influencers seems remarkable. Yes, Tesla is hugely successful, but it could hardly be compared to Amazon, which today is redefining the world in its own image (when you consider that so many applications are running on AWS, you can almost guarantee you've interacted with it today). Similarly, while SpaceX is unbelievably ambitious and certainly transformative in the way we think about space travel and exploration, it isn't a company having a direct impact on many of our day-to-day lives, even for those of us in the tech industry.

Surely, you'd think, Microsoft's recent evolution, which has seen it learning to stop worrying and love open source software, makes Satya Nadella a particularly influential figure. More so, at least, than Elon Musk. But when you step back, Musk's drive - almost chaotic in the challenges and problems it chooses to take on - is undoubtedly influential in a way that can't be rivalled by anyone else in the technology industry.

Read next: Elon Musk's tiny submarine is a lesson in how not to solve problems in tech

(Image via Stack Overflow)

Elon Musk personifies the relentless id of innovation

While Bezos might look like an evil genius with lethal business acumen, and Nadella a mild-mannered visionary quietly transforming a company many people had almost stopped noticing, Musk moves between projects and problems with the reckless abandon of someone who can't help but try new things. He characterizes, for good or ill, the Freudian id of many software professionals: easily bored yet relentlessly curious and interested in stuff. Okay, so money might be a big motivator - but you have to admit that there's a reason he didn't take the Bezos approach.

Who else did developers say would be influential in 2019?

Although Musk is the big headline here, there were some other interesting takeaways from this question in the Stack Overflow survey. For example, "Me/myself" came pretty high up the list, in fourth position - one above Donald Trump. It would be unfair to accuse respondents of arrogance; there's likely a trace of dry developer humor in this. Whatever the reality, it's good to see that the developer community isn't short of confidence. Interestingly, most of the top names are those at the top of the biggest tech companies - Zuckerberg, Cook, and Pichai all made the survey's top 10.

What about figures from the open source community?

There were far fewer personalities from the open source community. Linus Torvalds was the highest-ranking from this group (1.1%), with Dan Abramov, part of the React.js development team and co-creator of Redux, also featuring high on the list (0.6%).
Read next: 18 people in tech every programmer and software engineer needs to follow in 2019

Why aren't influential women being recognized?

Very few women were named by respondents - only Lisa Su, CEO of AMD, featured in the top 25. As if it weren't clear enough, this signals the importance of ensuring not only that more women are supported into positions of influence inside the tech industry, but also that those who reach such positions are visible.

It's also important to note that there is a significant gender imbalance in survey respondents - one that exceeds the overall imbalance in the industry. 91.7% of respondents to the survey identified as male, 7.9% as female, and 1.2% as non-binary, genderqueer, or gender non-conforming. This imbalance might explain why the list is dominated by men. Indeed, you could even say that Stack Overflow has a big part to play in helping to make the tech industry more accessible to - and supportive of - women and nonbinary people.

The EU commission introduces guidelines for achieving a ‘Trustworthy AI’

Savia Lobo
09 Apr 2019
4 min read
On the third day of Digital Day 2019, held in Brussels, the European Commission introduced a set of essential guidelines for building trustworthy AI, which will guide companies and governments in building ethical AI applications. By introducing these guidelines, the commission is working towards a three-step approach:

- Setting out the key requirements for trustworthy AI
- Launching a large-scale pilot phase for feedback from stakeholders
- Working on international consensus building for human-centric AI

The EU's high-level expert group on AI, which consists of 52 independent experts representing academia, industry, and civil society, came up with seven requirements that, according to them, future AI systems should meet.

Seven guidelines for achieving an ethical AI

- Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
- Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
- Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
- Transparency: The traceability of AI systems should be ensured.
- Diversity, non-discrimination, and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
- Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
- Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

According to the EU's official press release, "Following the pilot phase, in early 2020, the AI expert group will review the assessment lists for the key requirements, building on the feedback received. Building on this review, the Commission will evaluate the outcome and propose any next steps."

The plans fall under the Commission's AI strategy of April 2018, which "aims at increasing public and private investments to at least €20 billion annually over the next decade, making more data available, fostering talent and ensuring trust", the press release states.

Andrus Ansip, Vice-President for the Digital Single Market, said: "The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies. Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust."

Mariya Gabriel, Commissioner for Digital Economy and Society, said: "We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society. We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI."

Thomas Metzinger, a Professor of Theoretical Philosophy at the University of Mainz who was also a member of the commission's expert group that worked on the guidelines, has put forward an article titled 'Ethics washing made in Europe'. Metzinger says he worked on the Ethics Guidelines for nine months: "The result is a compromise of which I am not proud, but which is nevertheless the best in the world on the subject.
The United States and China have nothing comparable. How does it fit together?" he writes.

Eline Chivot, a senior policy analyst at the Center for Data Innovation think tank, told The Verge: "We are skeptical of the approach being taken, the idea that by creating a golden standard for ethical AI it will confirm the EU's place in global AI development. To be a leader in ethical AI you first have to lead in AI itself."

To know more about this news in detail, read the EU press release.

Is Google trying to ethics-wash its decisions with its new Advanced Tech External Advisory Council?
IEEE Standards Association releases ethics guidelines for automation and intelligent systems
Sir Tim Berners-Lee on digital ethics and socio-technical systems at ICDPPC 2018
Creators of Python, Java, C#, and Perl discuss the evolution and future of programming language design at PuPPy

Bhagyashree R
08 Apr 2019
11 min read
At the first annual charity event conducted by Puget Sound Programming Python (PuPPy) last Tuesday, four legendary language creators came together to discuss the past and future of language design. The event was organized to raise funds for Computer Science for All (CSforALL), an organization which aims to make CS an integral part of the educational experience. The panelists were the creators of some of the most popular programming languages:

- Guido van Rossum, the creator of Python
- James Gosling, the founder and lead designer behind the Java programming language
- Anders Hejlsberg, the original author of Turbo Pascal, who has also worked on the development of C# and TypeScript
- Larry Wall, the creator of Perl

The discussion was moderated by Carol Willing, who is currently a Steering Council member and developer for Project Jupyter. She is also a member of the inaugural Python Steering Council, a Python Software Foundation Fellow and former Director.

Key principles of language design

The first question thrown at the panelists was, "What are the principles of language design?"

Guido van Rossum believes: "Designing a programming language is very similar to the way JK Rowling writes her books, the Harry Potter series."

When asked how, he said JK Rowling is a genius in the way that some details she mentioned in her first Harry Potter book ended up playing important plot points in parts six and seven. Explaining how this relates to language design, he adds, "In language design often that's exactly how things go." When designing a language, we start by committing to certain details, like the keywords we want to use and the style of coding we want to follow. But whatever we decide on, we are stuck with, and in the future we need to find new ways to use those details, just like Rowling. "The craft of designing a language is, on one hand, picking your initial set of choices so that there are a lot of possible continuations of the story. The other half of the art of language design is going back to your story and inventing creative ways of continuing it in a way that you had not thought of," he adds.

When James Gosling was asked how Java came into existence and what design principles he abided by, he simply said, "it didn't come out of like a personal passion project or something. It was actually from trying to build a prototype." Gosling and his team were working on a project that involved understanding the domain of embedded systems. For this, they spoke to a lot of developers who built software for embedded systems to learn how their process worked. The project had about a dozen people on it, and Gosling was responsible for making things much easier from a programming language point of view. "It started out as kind of doing better C and then it got out of control that the rest of the project really ended up just providing the context," he adds. In the end, the only thing that survived the project was "Java". It was basically designed to solve the problems of people who live outside of data centers - people who were getting shredded by problems with networking, security, and reliability.

Larry Wall calls himself a "linguist" rather than a computer scientist. He wanted to create a language that was more like a natural language.
Explaining through an example, he said, "Instead of putting people in a university campus and deciding where they go, we're just gonna see where people want to walk and then put shortcuts in all those places." A basic principle behind creating Perl was to provide APIs to everything. It was aimed at being a good text-processing language linguistically, but also a glue language.

Wall further shares that in the 90s the language was stabilizing, but it did have some issues. So, in the year 2000, the Perl team basically decided to break everything and came up with a whole new set of design principles, based on which Perl was redesigned into Perl 6. Some of these principles were: pick the right defaults, conserve your brackets (because even Unicode does not have enough brackets), don't reinvent object orientation poorly, and so on. He adds: "A great deal of the redesign was to say okay what is the right peg to hang everything on? Is it object-oriented? Is it something in the lexical scope or in the larger scope? What is the right peg to hang each piece of information on, and if we don't have that peg how do we create it?"

Anders Hejlsberg shares that he follows a common principle in all the languages he has worked on, and that is: "there's only one way to do a particular thing." He believes that if a developer is provided with four different ways, he may end up choosing the wrong path and realize it later in development. According to Hejlsberg, this is why developers often end up creating something called "simplexity", which means taking something complex and wrapping a single wrapper on top of it so that the complexity goes away. Similar to the views of Guido van Rossum, he further adds that any decision you make when designing a language, you have to live with. When designing a language, you need to be very careful about reasoning over what "not" to introduce in the language. Often, people will come to you with suggestions for updates, but you cannot really change the nature of the programming language. Though you cannot change its basic nature, you can extend it through extensions. You essentially have two options: either stay true to the nature of the language, or develop a new one.

The type system of programming languages

Guido van Rossum, when asked about the typing approach in Python, shared how it was when Python was first introduced. Earlier, int was not a class; it was actually a little conversion function. If you wanted to convert a string to an integer, you could do that with a built-in function. Later on, Guido realized that this was a mistake. "We had a bunch of those functions and we realized that we had made a mistake, we had given users classes that were different from the built-in object types." That's when the Python team decided to reinvent the whole approach to types in Python and did a bunch of cleanups: they changed the function int into a designator for the class int, so that calling the class means constructing an instance of the class.

James Gosling shared that his focus has always been performance, and one factor for improving performance is the type system. It is really useful for things like building optimizing compilers and doing ahead-of-time correctness checking. Having a type system also helps in cases where you are targeting small-footprint devices.
"To do that kind of compaction you need every kind of hope that it gives you, every last drop of information and, the earlier you know it, the better job you do," he adds.

Anders Hejlsberg looks at type systems as a tooling feature. Developers love their IDEs; they are accustomed to things like statement completion, refactoring, and code navigation. These features are enabled by semantic knowledge of your code, and this semantic knowledge is provided by a compiler with a type system. Hejlsberg believes that adding types can dramatically increase the productivity of developers, which is a counterintuitive thought. "We think that dynamic languages were easier to approach because you've got rid of the types which was a bother all the time. It turns out that you can actually be more productive by adding types if you do it in a non-intrusive manner and if you work hard on doing good type inference and so forth," he adds.

Talking about the type system in Perl, Wall started off by saying that Perl 5 and Perl 6 had very different type systems. In Perl 5, everything was treated as a string, even if it was a number or a floating point. The team wanted to keep this feature in Perl 6 as part of the redesign, but they realized that "it's fine if the new user is confused about the interchangeability but it's not so good if the computer is confused about things." For Perl 6, Wall and his team envisioned making it both a better object-oriented and a better functional programming language. To achieve this goal, it is important to have a very sound type system and a sound meta-object model underneath. You also need to take slogans like "everything is an object, everything is a closure" very seriously.

What makes a programming language maintainable

Guido van Rossum believes that to make a programming language maintainable, it is important to hit the right balance between the flexible and disciplined approaches. While dynamic typing is great for small programs, large programs require a much more disciplined approach. And it is better if the language itself enables that discipline rather than giving you the full freedom of doing whatever you want. This is why Guido is planning to add a technology very similar to TypeScript to Python. He adds: "TypeScript is actually incredibly useful and so we're adding a very similar idea to Python. We are adding it in a slightly different way because we have a different context."

Along with a type system, refactoring engines can also prove to be very helpful: they make it easier to perform large-scale refactorings, across millions of lines of code at once. Often, people do not rename methods because it is really hard to go over a piece of code and rename exactly the right variable. If you are provided with a refactoring engine, you just need to press a couple of buttons, type in the new name, and it will be refactored in maybe just 30 seconds.

The origin of the TypeScript project was these enormous JavaScript codebases. As these codebases became bigger and bigger, they became quite difficult to maintain, basically turning into "write-only code", shared Anders Hejlsberg. He adds that this is why we need a semantic understanding of the code, which makes refactoring much easier. "This semantic understanding requires a type system to be in place and once you start adding that you add documentation to the code," added Hejlsberg.
Wall also supports the same thought, that "good lexical scoping helps with refactoring".

The future of programming language design

When asked about the future of programming language design, James Gosling shared that a very underexplored area in programming is writing code for GPUs. He highlighted that we do not currently have any programming language that works like a charm with GPUs, and much work needs to be done in that area.

Anders Hejlsberg rightly mentioned that programming languages do not move at the same speed as hardware or other technologies. In terms of evolution, programming languages are more like maths and the human brain. He said, "We're still programming in languages that were invented 50 years ago; all of the principles of functional programming were thought of more than 50 years ago." But he does believe that instead of segregating into separate categories like object-oriented or functional programming, languages are now becoming multi-paradigm: "Languages are becoming more multi-paradigm. I think it is wrong to talk about oh I only like object-oriented programming, or imperative programming, or functional programming language."

Now, it is important to be aware of recent research, new thinking, and new paradigms, and to incorporate them into our programming style - but tastefully.

Watch the talk conducted by PuPPy to know more in detail.

Python 3.8 alpha 2 is now available for testing
ISO C++ Committee announces that C++20 design is now feature complete
Using lambda expressions in Java 11 [Tutorial]

Microsoft’s #MeToo reckoning: female employees speak out against workplace harassment and discrimination

Sugandha Lahoti
05 Apr 2019
7 min read
Microsoft was founded by Bill Gates and Paul Allen 44 years ago today, on April 4, 1975. A lot has changed for the company since then in terms of its incredible growth story and inclusive business practices, but some ghosts from its Silicon Valley bro-culture past continue to linger. In 1978, the company had eleven employees, of whom only two were women - Andrea Lewis, technical writer, and Maria Wood, bookkeeper - putting women's representation at 18 percent, with zero core engineering roles. Even with just two women, Microsoft ran into trouble over its sexual conduct policies: Maria Wood left the company in 1983, suing it for sexual discrimination. She then disappeared from professional life to raise her children and volunteer for good causes. Andrea Lewis also left at almost the same time, eventually becoming a freelance journalist and fiction writer.

Content note: this piece contains references to sexual harassment and abusive behaviors.

As of June 30, 2018, the combined percentage of women working at Microsoft and LinkedIn stood at 28 percent. For Microsoft alone, the percentage of women stood at 26.6 percent. The representation of women in technical roles is 19.9 percent and in leadership roles 19.7 percent. The percentage of female interns at Microsoft stood at 42.5 percent in the past year. These numbers do not include temp and contract workers at Microsoft.

Source: Microsoft Blog

Recently, female Microsoft employees shared their experiences of sexual harassment and discrimination at the company in an email chain. According to Quartz, which first reported the news after reviewing more than 90 pages of emails, this chain has gained notice from the company's senior leadership team. The chain, which started on March 20, eventually drew a response from Kathleen Hogan, Microsoft's head of human resources. It was intended as a way of organizing workers and raising the problems with CEO Satya Nadella, and in this regard it was successful: the company turned the weekly all-hands meeting on Thursday into a Q&A session where employees could discuss the toxic work culture and address the Microsoft leadership directly about the accusations that emerged in the thread. According to Wired, roughly 100 to 150 employees attended the Q&A in person, with many others watching via a live stream. Some female and male employees at the event wore all white, inspired by the congresswomen who wore "suffragette white" to the State of the Union in February. Responding to the concerns raised in the meeting, Nadella was apparently empathetic and expressed sadness and disappointment.

Hogan responded to the email chain on March 29, writing: "I would like to offer to anyone who has had such demeaning experiences including those who felt were dismissed by management or HR to email me directly. I will personally look into the situation with my team. I understand the devastating impact of such experiences, and [Nadella] wants to be made aware of any such behavior, and we will do everything we can to stop it."

What allegations of sexual harassment and abuse emerged in the Microsoft employees' email thread?

One Microsoft Partner employee reportedly wrote in the email that she "was asked to sit on someone's lap twice in one meeting in front of HR and other executives," stating that they didn't do anything in response to this violation of the company's policy. "The person said that he did not have to listen and repeated the request a second time," she wrote, according to Quartz.
"No one said anything."

Another female Microsoft employee said that an employee of a Microsoft partner company threatened to kill her during a work trip if she didn't engage in sexual acts, according to Quartz and Wired reports. "I raised immediate attention to HR and management," she wrote, according to Quartz. "My male manager told me that 'it sounded like he was just flirting' and I should 'get over it'. HR basically said that since there was no evidence, and this man worked for a partner company and not Microsoft, there was nothing they could do."

Another ex-Microsoft employee shared her story on Twitter.

https://twitter.com/CindyGross/status/1113893229013995520

Another employee, who had worked on the Xbox core team, reportedly said in the email chain that being called a "bitch" was common within the company. She said the word had been used against her on more than one occasion, even during roundtables where female members of the Xbox core team were in attendance: "Every woman, except for 1, had been called a bitch at work."

"This thread has pulled the scab off a festering wound. The collective anger and frustration is palpable. A wide audience is now listening. And you know what? I'm good with that," one Microsoft employee in the email chain wrote, according to Quartz.

The problem is far bigger than Microsoft - it's the whole tech industry

Sadly, reports of discriminatory and abusive behavior towards women are common across the tech industry, so it would be wrong to see this as a Microsoft issue alone. For example, according to a 2016 survey, sixty percent of women working in Silicon Valley have experienced unwanted sexual advances. Two-thirds of these respondents said that the advances came from superiors - a clear abuse of power. Even a couple of years later, the report remains a useful document that throws light on sexism and misogyny in an industry still dominated by men.

According to court filings made public on Monday this week, women at Microsoft Corp working in U.S.-based technical jobs filed 238 internal complaints about gender discrimination or sexual harassment between 2010 and 2016. In response to these allegations, Kathleen Hogan sent an email to all Microsoft employees.

Kathleen Hogan criticised

In a Medium blog post, Mitchel Lewis criticised Hogan's email. He wrote that "an embarrassing 10% of [Microsoft's] gender discrimination claims and 50% of their harassment claims, each of which had almost 90 or so instances last year, were found to lack merit by ERIT, which is a team comprised almost exclusively of lawyers on Microsoft's payroll." He adds, "But as a staunch feminist, Kathleen did not address the fact that such a low rate of dignified claims can also serve as a correlate of an environment that discourages people to step forward with claims of abuse as it could be the result of an environment that is ripe for predation, corruption, and oppression."

April Wensel, the founder of Compassionate Coding, shared her views on Twitter, both before and after the story broke. Earlier in March, she had been at the receiving end of (arguably gendered) criticism from a senior male Microsoft employee who set up a Twitter poll to disprove Wensel's observation that many tech workers are unhappy. Wensel noted, "The unchecked privilege is remarkable. Imagine trying to create positive change trapped in an organization that supports this kind of behavior."
https://twitter.com/aprilwensel/status/1103178700130926592

https://twitter.com/aprilwensel/status/1113940085227933696

Microsoft Workers 4 Good, a coalition of Microsoft employees, also tweeted in support:

https://twitter.com/MsWorkers4/status/1113934565737783296

https://twitter.com/MsWorkers4/status/1113934825931415552

The group had previously posted an open letter to Microsoft's CEO in protest of the company's $480 million deal with the U.S. Army to provide it with HoloLens 2. Other people also joined in solidarity with Microsoft's female employees:

https://twitter.com/rod3000/status/1113924177428271104

https://twitter.com/OliviaGoldhill/status/1113836770871980034

https://twitter.com/cindygallop/status/1113889617755951104

Moving forward: keep the pressure on leadership

How Microsoft chooses to move forward remains to be seen. Indeed, this is only part of a broader story about an industry finally having to reckon with decades of sexism and marginalization. And while tackling it is clearly the right thing to do, whether that is possible at the moment is a huge question. One thing is for sure: it probably won't be tackled by leadership teams alone. The work done by grassroots-level organizations and figures like April Wensel is essential when it comes to effecting real change.

BuzzFeed Report: Google's sexual misconduct policy "does not apply retroactively to claims already compelled to arbitration".
Uber's Head of corporate development, Cameron Poetzscher, resigns following a report on a 2017 investigation into sexual misconduct.
Following Google, Facebook changes its forced arbitration policy for sexual harassment claims.

Yuri Shkuro on Observability challenges in microservices and cloud-native applications

Packt Editorial Staff
05 Apr 2019
11 min read
In the last decade, we saw a significant shift in how modern, internet-scale applications are being built. Cloud computing (infrastructure as a service) and containerization technologies (popularized by Docker) enabled a new breed of distributed system designs commonly referred to as microservices (and their next incarnation, FaaS). Successful companies like Twitter and Netflix have been able to leverage them to build highly scalable, efficient, and reliable systems, and to deliver more features faster to their customers. In this article, we explain the concept of observability in microservices, the challenges microservices pose for it, and why traditional monitoring tools fall short. This article is an extract from the book Mastering Distributed Tracing, written by Yuri Shkuro. The book will equip you to operate and enhance your own tracing infrastructure. Through practical exercises and code examples, you will learn how end-to-end tracing can be used as a powerful application performance management and comprehension tool.

While there is no official definition of microservices, a certain consensus has evolved over time in the industry. Martin Fowler, the author of many books on software design, argues that microservices architectures exhibit the following common characteristics:

- Componentization via (micro)services
- Smart endpoints and dumb pipes
- Organized around business capabilities
- Decentralized governance
- Decentralized data management
- Infrastructure automation
- Design for failure
- Evolutionary design

Because of the large number of microservices involved in building modern applications, rapid provisioning, rapid deployment via decentralized continuous delivery, strict DevOps practices, and holistic service monitoring are necessary to effectively develop, maintain, and operate such applications. The infrastructure requirements imposed by microservices architectures spawned a whole new area of development of infrastructure platforms and tools for managing these complex cloud-native applications. In 2015, the Cloud Native Computing Foundation (CNCF) was created as a vendor-neutral home for many emerging open source projects in this area, such as Kubernetes, Prometheus, Linkerd, and so on, with a mission to "make cloud-native computing ubiquitous."

Read more: Honeycomb CEO Charity Majors discusses observability and dealing with "the coming armageddon of complexity" [Interview]

What is observability?

In control theory, a system is "observable" if its internal states, and accordingly its behavior, can be determined by looking only at its inputs and outputs. At the 2018 Observability Practitioners Summit, Bryan Cantrill, the CTO of Joyent and one of the creators of the tool dtrace, argued that this definition is not practical to apply to software systems, because they are so complex that we can never know their complete internal state, and therefore control theory's binary measure of observability is always zero (I highly recommend watching his talk on YouTube: https://youtu.be/U4E0QxzswQc). Instead, a more useful definition of observability for a software system is its "capability to allow a human to ask and answer questions". The more questions we can ask and answer about the system, the more observable it is.

Figure 1: The Twitter debate

There are also many debates and Twitter zingers about the difference between monitoring and observability. Traditionally, the term monitoring was used to describe metrics collection and alerting.
Sometimes it is used more generally to include other tools, such as "using distributed tracing to monitor distributed transactions." The Oxford dictionaries define the verb "monitor" as "to observe and check the progress or quality of (something) over a period of time; keep under systematic review." However, the term is better scoped to describing the process of observing certain a priori defined performance indicators of our software system, such as those measuring the impact on the end-user experience, like latency or error counts, and using their values to alert us when these signals indicate abnormal behavior of the system. Metrics, logs, and traces can all be used as a means of extracting those signals from the application. We can then reserve the term "observability" for situations when we have a human operator proactively asking questions that were not predefined. As Bryan Cantrill put it in his talk, this process is debugging, and we need to "use our brains when debugging." Monitoring does not require a human operator; it can and should be fully automated.

"If you want to talk about (metrics, logs, and traces) as pillars of observability - great. The human is the foundation of observability!" -- Bryan Cantrill

In the end, the so-called "three pillars of observability" (metrics, logs, and traces) are just tools, or more precisely, different ways of extracting sensor data from the applications. Even with metrics, modern time series solutions like Prometheus, InfluxDB, or Uber's M3 are capable of capturing time series with many labels, such as which host emitted a particular value of a counter. Not all labels may be useful for monitoring, since a single misbehaving service instance in a cluster of thousands does not warrant an alert that wakes up an engineer. But when we are investigating an outage and trying to narrow down the scope of the problem, the labels can be very useful as observability signals.

The observability challenge of microservices

By adopting microservices architectures, organizations expect to reap many benefits, from better scalability of components to higher developer productivity. There are many books, articles, and blog posts written on this topic, so I will not go into that. Despite the benefits and eager adoption by companies large and small, microservices come with their own challenges and complexity. Companies like Twitter and Netflix were successful in adopting microservices because they found efficient ways of managing that complexity. Vijay Gill, Senior VP of Engineering at Databricks, goes as far as saying that the only good reason to adopt microservices is to be able to scale your engineering organization and to "ship the org chart". So, what are the challenges of this design? There are quite a few:

- In order to run these microservices in production, we need an advanced orchestration platform that can schedule resources, deploy containers, autoscale, and so on. Operating an architecture of this scale manually is simply not feasible, which is why projects like Kubernetes became so popular.
- In order to communicate, microservices need to know how to find each other on the network, how to route around problematic areas, how to perform load balancing, how to apply rate limiting, and so on. These functions are delegated to advanced RPC frameworks or to external components like network proxies and service meshes.
- Splitting a monolith into many microservices may actually decrease reliability. Suppose we have 20 components in the application and all of them are required to produce a response to a single request. When we run them in a monolith, our failure modes are restricted to bugs and potentially a crash of the whole server running the monolith. But if we run the same components as microservices, on different hosts and separated by a network, we introduce many more potential failure points, from network hiccups to resource constraints due to noisy neighbors.
- The latency may also increase. Assume each microservice has 1 ms average latency, but the 99th percentile is 1 s. A transaction touching just one of these services has a 1% chance of taking ≥ 1 s. A transaction touching 100 of these services has a 1 - (1 - 0.01)^100 ≈ 63% chance of taking ≥ 1 s.
- Finally, the observability of the system is dramatically reduced if we try to use traditional monitoring tools.

When we see that some requests to our system are failing or slow, we want our observability tools to tell us the story about what happens to that request.

Traditional monitoring tools

Traditional monitoring tools were designed for monolith systems, observing the health and behavior of a single application instance. They may be able to tell us a story about that single instance, but they know almost nothing about the distributed transaction that passed through it. These tools "lack the context" of the request.

Metrics

It goes like this: "Once upon a time... something bad happened. The end." How do you like this story? This is what the chart in Figure 2 tells us. It's not completely useless; we do see a spike, and we could define an alert to fire when this happens. But can we explain or troubleshoot the problem?

Figure 2: A graph of two time series representing (hypothetically) the volume of traffic to a service

Metrics, or stats, are numerical measures recorded by the application, such as counters, gauges, or timers. Metrics are very cheap to collect, since numeric values can be easily aggregated to reduce the overhead of transmitting that data to the monitoring system. They are also fairly accurate, which is why they are very useful for the actual monitoring (as the dictionary defines it) and alerting. Yet the same capacity for aggregation is what makes metrics ill-suited for explaining the pathological behavior of the application. By aggregating data, we are throwing away all the context we had about the individual transactions.
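To make the point about labels and aggregation concrete, here is a small, dependency-free sketch (not from Shkuro's book; the Counter class and label names are hypothetical) showing how labels attach context to a metric, and how aggregation then discards it:

```typescript
// A minimal labeled counter: each unique label combination is its own time series.
class Counter {
    private series = new Map<string, number>();

    inc(labels: Record<string, string>, delta = 1): void {
        // Serialize labels into a stable key, e.g. 'endpoint=/login|host=web-42'.
        const key = Object.keys(labels).sort()
            .map(k => `${k}=${labels[k]}`).join("|");
        this.series.set(key, (this.series.get(key) ?? 0) + delta);
    }

    // Aggregation sums across all series, throwing away the per-label context.
    total(): number {
        let sum = 0;
        for (const value of this.series.values()) sum += value;
        return sum;
    }
}

const requests = new Counter();
requests.inc({ endpoint: "/login", host: "web-42" });
requests.inc({ endpoint: "/login", host: "web-07" });
console.log(requests.total()); // 2 -- but which host misbehaved is now invisible
```

The aggregated total is what a monitoring system alerts on; the per-label series are what an investigating human falls back on as observability signals.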
Traditional monitoring tools

Traditional monitoring tools were designed for monolith systems, observing the health and behavior of a single application instance. They may be able to tell us a story about that single instance, but they know almost nothing about the distributed transaction that passed through it. These tools "lack the context" of the request.

Metrics

It goes like this: "Once upon a time…something bad happened. The end." How do you like this story? This is what the chart in Figure 2 tells us. It's not completely useless; we do see a spike, and we could define an alert to fire when this happens. But can we explain or troubleshoot the problem?

Figure 2: A graph of two time series representing (hypothetically) the volume of traffic to a service

Metrics, or stats, are numerical measures recorded by the application, such as counters, gauges, or timers. Metrics are very cheap to collect, since numeric values can be easily aggregated to reduce the overhead of transmitting that data to the monitoring system. They are also fairly accurate, which is why they are very useful for actual monitoring (as the dictionary defines it) and alerting. Yet the same capacity for aggregation is what makes metrics ill-suited for explaining the pathological behavior of the application. By aggregating data, we are throwing away all the context we had about the individual transactions.

Logs

Logging is an even more basic observability tool than metrics. Every programmer learns their first programming language by writing a program that prints (that is, logs) "Hello, World!" Similar to metrics, logs struggle with microservices because each log stream only tells us about a single instance of a service. However, evolving programming paradigms create other problems for logs as a debugging tool. Ben Sigelman, who built Google's distributed tracing system Dapper, explained it in his KubeCon 2016 keynote talk as four types of concurrency (Figure 3):

Figure 3: Evolution of concurrency

Years ago, applications like early versions of Apache HTTP Server handled concurrency by forking child processes and having each process handle a single request at a time. Logs collected from that single process could do a good job of describing what happened inside the application.

Then came multi-threaded applications and basic concurrency. A single request would typically be executed by a single thread sequentially, so as long as we included the thread name in the logs and filtered by that name, we could still get a reasonably accurate picture of the request execution.

Then came asynchronous concurrency, with asynchronous and actor-based programming, executor pools, futures, promises, and event-loop-based frameworks. The execution of a single request may start on one thread, continue on another, and finish on a third. In the case of event-loop systems like Node.js, all requests are processed on a single thread, but when the execution tries to make an I/O call, it is put in a wait state, and when the I/O is done, the execution resumes after waiting its turn in the queue. Both of these asynchronous concurrency models result in each thread switching between multiple different requests that are all in flight. Observing the behavior of such a system from the logs is very difficult, unless we annotate all logs with some kind of unique id representing the request rather than the thread, a technique that actually gets us close to how distributed tracing works.

Finally, microservices introduced what we can call "distributed concurrency." Not only can the execution of a single request jump between threads, but it can also jump between processes, when one microservice makes a network call to another. Trying to troubleshoot request execution from such logs is like debugging without a stack trace: we get small pieces, but no big picture. In order to reconstruct the flight of the request from the many log streams, we need powerful log aggregation technology and a distributed context propagation capability to tag all those logs in different processes with a unique request id that we can use to stitch those requests together. We might as well be using a real distributed tracing infrastructure at this point! Yet even after tagging the logs with a unique request id, we still cannot assemble them into an accurate sequence, because the timestamps from different servers are generally not comparable due to clock skews.
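As a sketch of the request-id annotation technique just described (the service name and log format are invented for illustration, not taken from the book), here is how an asyncio-based Python service might stamp every log line with a per-request id using contextvars:

    import asyncio
    import contextvars
    import logging
    import uuid

    # The context variable survives across await points, so the id follows the
    # request even as the event loop interleaves many requests on one thread.
    request_id = contextvars.ContextVar("request_id", default="-")

    class RequestIdFilter(logging.Filter):
        # Stamp every log record with the current request id.
        def filter(self, record):
            record.request_id = request_id.get()
            return True

    logging.basicConfig(format="%(asctime)s [req=%(request_id)s] %(message)s",
                        level=logging.INFO)
    logger = logging.getLogger("checkout")
    logger.addFilter(RequestIdFilter())

    async def handle_request(name):
        # In a real service the id would come from an incoming request header.
        request_id.set(uuid.uuid4().hex[:8])
        logger.info("started %s", name)
        await asyncio.sleep(0.01)  # simulated I/O; other requests run meanwhile
        logger.info("finished %s", name)

    async def main():
        await asyncio.gather(*(handle_request(f"order-{i}") for i in range(3)))

    asyncio.run(main())

Propagating the same id across process boundaries, for example in an HTTP header, is exactly the distributed context propagation that distributed tracing automates.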
In this article, we looked at the concept of observability and some of the challenges one has to face in microservices. We further discussed traditional monitoring tools for microservices. Applying distributed tracing to microservices-based architectures will be easy with Mastering Distributed Tracing, written by Yuri Shkuro.

6 Ways to blow up your Microservices!
Have Microservices killed the monolithic architecture? Maybe not!
How to build Dockers with microservices
Tech regulation heats up: Australia's 'Sharing of Abhorrent Violent Material Bill' to Warren's 'Corporate Executive Accountability Act'

Fatema Patrawala
04 Apr 2019
6 min read
Businesses in powerful economies like the USA, the UK, and Australia are arguably as powerful as politics, or even more so, especially now that we inhabit a global economy where an intricate web of connections can expose, for instance, the appalling employment conditions of the Chinese workers who assemble the Apple smartphones we depend on. Amazon's revenue is bigger than Kenya's GDP, and according to Business Insider, 25 major American corporations have revenues greater than the GDPs of countries around the world. Because corporations create millions of jobs and control vast amounts of money and resources, their sheer economic power dwarfs governments' ability to regulate and oversee them.

With the recent global-scale scandals the tech industry has found itself in, some resulting in the deaths of groups of people, governments are waking up to the urgent need to hold tech companies responsible. While some government laws are reactionary, others take a more cautious approach. One thing is for sure: 2019 will see a lot of tech regulation come into play. How effective it is, what intended and unintended consequences it bears, and how masterfully big tech wields its lobbying prowess, we'll have to wait and see.

Holding tech platforms that enable hate and violence accountable

Australian govt passes law that criminalizes companies and execs for hosting abhorrent violent content

Today, the Australian parliament passed legislation to crack down on violent videos on social media. The bill, described by the attorney general, Christian Porter, as "most likely a world first", was drafted in the wake of the Christchurch terrorist attack by a White supremacist Australian, when video of the perpetrator's violent attack spread on social media faster than it could be removed.

The Sharing of Abhorrent Violent Material bill creates new offences for content service providers and hosting services that fail to notify the Australian federal police about, or fail to expeditiously remove, videos depicting "abhorrent violent conduct". That conduct is defined as videos depicting terrorist acts, murders, attempted murders, torture, rape, or kidnap. The bill creates a regime for the eSafety Commissioner to notify social media companies that they are deemed to be aware they are hosting abhorrent violent material, triggering an obligation to take it down.

The Digital Industry Group, which consists of Google, Facebook, Twitter, Amazon, and Verizon Media in Australia, has warned that the bill was passed without meaningful consultation and threatens penalties against content created by users. Sunita Bose, the group's managing director, says, "with the vast volumes of content uploaded to the internet every second, this is a highly complex problem". She further argues that "this pass it now, change it later approach to legislation creates immediate uncertainty to the Australia's tech industry".

The Chief Executive of Atlassian, Scott Farquhar, said that the legislation fails to define how "expeditiously" violent material should be removed, and does not specify who in a social media company should be punished.

https://twitter.com/scottfarkas/status/1113391831784480768

The Law Council of Australia president, Arthur Moses, said criminalising social media companies and executives was a "serious step" that should not be legislated as a "knee-jerk reaction to a tragic event", because of the potential for unintended consequences.
Contrasting with Australia's knee-jerk legislation, the US House Judiciary committee has organized a hearing on white nationalism and hate speech and their spread online, and has invited social media platform execs and civil rights organizations to participate.

Holding companies accountable for reckless corporate behavior

Facebook has gone from scandal to scandal with impunity in recent years, given the lack of legislation in this space: from data privacy breaches to disinformation campaigns and beyond. Adding to its ever-growing list of data scandals, yesterday CNN Business uncovered that hundreds of millions of Facebook records were stored on Amazon cloud servers in a way that allowed them to be downloaded by the public.

Earlier this month, on 8th March, Sen. Warren proposed building strong antitrust laws and breaking up big tech companies like Amazon, Google, Facebook, and Apple. Yesterday, she introduced the Corporate Executive Accountability Act and also reintroduced the "too big to fail" bill, a new piece of legislation that would make it easier to criminally charge company executives when Americans' personal data is breached, among other negligent corporate behaviors.

"When a criminal on the street steals money from your wallet, they go to jail. When small-business owners cheat their customers, they go to jail," Warren wrote in a Washington Post op-ed published on Wednesday morning. "But when corporate executives at big companies oversee huge frauds that hurt tens of thousands of people, they often get to walk away with multimillion-dollar payouts."

https://twitter.com/SenWarren/status/1113448794912382977
https://twitter.com/SenWarren/status/1113448583771185153

According to Warren, just one banker went to jail after the 2008 financial crisis. The CEO of Wells Fargo and his successor walked away from the megabank with multimillion-dollar pay packages after it was discovered employees had created millions of fake accounts. The same goes for the Equifax CEO after its data breach.

The new legislation Warren introduced would make it easier to hold corporate executives accountable for their companies' wrongdoing. Typically, it has been hard to prove a case against individual executives for turning a blind eye toward risky or questionable activity, because prosecutors have to prove intent, that is, that they meant to do it. This legislation would change that, Heather Slavkin Corzo, a senior fellow at the progressive nonprofit Americans for Financial Reform, told Vox. "It's easier to show a lack of due care than it is to show the mental state of the individual at the time the action was committed," she said.

A summary of the legislation released by Warren's office explains that it would "expand criminal liability to negligent executives of corporations with over $1 billion annual revenue" who:

- Are found guilty, plead guilty, or enter into a deferred or non-prosecution agreement for any crime.
- Are found liable or enter a settlement with any state or Federal regulator for the violation of any civil law if that violation affects the health, safety, finances, or personal data of 1% of the American population or 1% of the population of any state.
- Are found liable or guilty of a second civil or criminal violation for a different activity while operating under a civil or criminal judgment of any court, a deferred prosecution or non-prosecution agreement, or settlement with any state or Federal agency.
Executives found guilty of these violations could get up to a year in jail, and a second violation could mean up to three years. The Corporate Executive Accountability Act is yet another push from Warren, who has focused much of her presidential campaign on holding corporations and their leaders responsible for both their market dominance and perceived corruption.

Elizabeth Warren wants to break up tech giants like Amazon, Google, Facebook, and Apple and build strong antitrust laws
Zuckerberg wants to set the agenda for tech regulation in yet another "digital gangster" move
Facebook under criminal investigations for data sharing deals: NYT report

Over 30 AI experts join shareholders in calling on Amazon to stop selling Rekognition, its facial recognition tech, for government surveillance

Natasha Mathur
04 Apr 2019
6 min read
Update, 12th April 2019: Amazon shareholders will now vote at the 2019 Annual Meeting of Shareholders of Amazon on whether the company board should prohibit sales of facial recognition tech to the government. The meeting will be held at 9:00 a.m. Pacific Time on Wednesday, May 22, 2019, at Fremont Studios, Seattle, Washington.

Over 30 researchers from top tech firms (Google, Microsoft, et al.), academic institutions, and civil rights groups signed an open letter last week, calling on Amazon to stop selling Amazon Rekognition to law enforcement. The letter, published on Medium, has been signed by the likes of this year's Turing Award winner, Yoshua Bengio, and Anima Anandkumar, a Caltech professor, director of Machine Learning research at NVIDIA, and former principal scientist at AWS, among others.

https://twitter.com/rajiinio/status/1113480353308651520

Amazon Rekognition is a deep-learning-based service that is capable of storing and searching tens of millions of faces at a time. It allows detection of objects, scenes, activities, and inappropriate content. However, Amazon Rekognition has long been a bone of contention between the public and rights groups, due to the inaccuracies in its face recognition capability and concerns that selling Rekognition to law enforcement could harm public privacy.

For instance, an anonymous Amazon employee spoke out against Amazon selling its facial recognition technology to the police last year, calling it a "flawed technology". Also, a group of seven House Democrats sent a letter to Amazon's CEO last November over Amazon Rekognition, raising concerns and questions about its accuracy and its possible effects. Moreover, a group of over 85 coalition groups sent a letter to Amazon earlier this year, urging the company not to sell its facial surveillance technology to the government.

Researchers argue against unregulated Amazon Rekognition use

The researchers state in the letter that a study conducted by Inioluwa Deborah Raji and Joy Buolamwini shows that Rekognition has much higher error rates and is imprecise in classifying the gender of darker-skinned women compared to lighter-skinned men. However, Dr. Matthew Wood, general manager, AI, AWS, and Michael Punke, vice president of global public policy, AWS, dismissed the research, labeling it "misleading". Dr. Wood also stated that "facial analysis and facial recognition are completely different in terms of the underlying technology and the data used to train them. Trying to use facial analysis to gauge the accuracy of facial recognition is ill-advised". The researchers called out that statement, saying it is "problematic on multiple fronts".

The letter also sheds light on the real-world implications of the misuse of face recognition tools. It refers to Clare Garvie, Alvaro Bedoya, and Jonathan Frankle of the Center on Privacy & Technology at Georgetown Law, who study law enforcement's use of face recognition. According to them, using face recognition tech can put the wrong people on trial in cases of mistaken identity. Also, it is quite common that law enforcement operators are neither aware of the parameters of these tools, nor do they know how to interpret some of their results, and relying on decisions from automated tools can lead to "automation bias".
Another argument Dr. Wood makes to defend the technology is that "To date (over two years after releasing the service), we have had no reported law enforcement misuses of Amazon Rekognition." However, the letter states that this is unfair, as there are currently no laws in place to audit Rekognition's use. Moreover, Amazon has not disclosed any information about its customers or any details about the error rates of Rekognition across different intersectional demographics. "How can we then ensure that this tool is not improperly being used as Dr. Wood states? What we can rely on are the audits by independent researchers, such as Raji and Buolamwini…that demonstrates the types of biases that exist in these products", reads the letter.

The researchers say that they find Dr. Wood's and Mr. Punke's response to the peer-reviewed research disappointing, and they hope Amazon will dive deeper into examining all of its products before deciding to make them available for use by the police.

More trouble for Amazon: SEC approves shareholders' proposal on the need to release more information on Rekognition

Just earlier this week, the U.S. Securities and Exchange Commission (SEC) announced a ruling that deems appropriate the Amazon shareholders' proposals demanding that Amazon provide more information about the company's use and sale of biometric facial recognition technology. The shareholders said that they are worried about the use of Rekognition and consider it a significant risk to human rights and shareholder value. The shareholders put forward two new proposals regarding Rekognition and requested their inclusion in the company's proxy materials:

- The first proposal calls on the Board of Directors to prohibit the selling of Rekognition to the government unless it has been evaluated that the tech does not violate human and civil rights.
- The second proposal urges the Board to commission an independent study of Rekognition. This would help examine the risks of Rekognition to immigrants, activists, people of color, and the general public of the United States. The study would also help analyze how such tech is marketed and sold to foreign governments that may be "repressive", along with other financial risks associated with human rights issues.

Amazon criticized the proposals and claimed that both should be discarded under the subsections of Rule 14a-8, as they relate to the company's "ordinary business and operations that are not economically significant". But the SEC's Division of Corporation Finance countered Amazon's arguments. It told Amazon that it is unable to conclude that the "proposals are not otherwise significantly related to the Company's business" and approved their inclusion in the company's proxy materials, reports Compliance Week. "The Board of Directors did not provide an opinion or evidence needed to support the claim that the issues raised by the Proposals are 'an insignificant public policy issue for the Company'", states the division. "The controversy surrounding the technology threatens the relationship of trust between the Company and its consumers, employees, and the public at large".

The SEC ruling, however, only expresses informal views; whether Amazon is obligated to accept the proposals can only be decided by the U.S. District Court, should the shareholders legally pursue these proposals further. For more information, check out the detailed coverage in the Compliance Week report.
AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition
AWS makes Amazon Rekognition, its image recognition AI, available for Asia-Pacific developers
Amazon Rekognition can now 'recognize' faces in a crowd at real-time