
Tech News


GitHub October 21st outage RCA: How prioritizing ‘data integrity’ launched a series of unfortunate events that led to a day-long outage

Natasha Mathur
31 Oct 2018
7 min read
Yesterday, GitHub posted the root-cause analysis of the outage that took place on 21st October. The outage started at 23:00 UTC on 21st October and left the site degraded until 23:00 UTC on 22nd October. Although the backend Git services were up and running during the outage, multiple internal systems were affected. Users were unable to log in, submit Gists or bug reports, outdated files were being served, branches went missing, and so forth. Moreover, GitHub couldn't serve webhook events or build and publish GitHub Pages sites.

"At 22:52 UTC on October 21, routine maintenance work to replace failing 100G optical equipment resulted in the loss of connectivity between our US East Coast network hub and our primary US East Coast data center. Connectivity between these locations was restored in 43 seconds, but this brief outage triggered a chain of events that led to 24 hours and 11 minutes of service degradation," mentioned the GitHub team.

GitHub uses MySQL to store GitHub metadata. It operates multiple MySQL clusters of different sizes, each consisting of up to dozens of read replicas that store non-Git metadata. These clusters are how GitHub's applications are able to provide pull requests and issues, manage authentication, coordinate background processing, and serve additional functionality beyond raw Git object storage. For improved performance, GitHub applications direct writes to the relevant primary for each cluster but delegate read requests to a subset of replica servers (a minimal sketch of this kind of read/write routing appears at the end of this article).

Orchestrator is used to manage GitHub's MySQL cluster topologies and to handle automated failover. Orchestrator considers a number of factors during this process and is built on top of Raft for consensus. In some cases, Orchestrator can implement topologies that the applications are unable to support, which is why it is crucial to align Orchestrator configuration with application-level expectations.

Here's a timeline of the events that led to the outage:

22:52 UTC, 21st Oct: Orchestrator began a process of leadership deselection as per the Raft consensus. After Orchestrator managed to reorganize the US West Coast database cluster topologies and connectivity was restored, write traffic started directing to the new primaries in the West Coast site. The database servers in the US East Coast data center contained writes that had not been replicated to the US West Coast facility. Because of this, the database clusters in both data centers included writes that were not present in the other data center. This is why the GitHub team was unable to safely fail the primaries back over to the US East Coast data center (a failover being a procedure via which a system automatically transfers control to a duplicate system on detecting failures).

22:54 UTC, 21st Oct: GitHub's internal monitoring systems began to generate alerts indicating that the systems were experiencing numerous faults. By 23:02 UTC, GitHub engineers found that the topologies for numerous database clusters were in an unexpected state. The Orchestrator API displayed a database replication topology that included only the servers from the US West Coast data center.

23:07 UTC, 21st Oct: The responding team manually locked the deployment tooling to prevent any additional changes from being introduced. At 23:09 UTC, the site was placed into yellow status. At 23:11 UTC, the incident coordinator changed the site status to red.
23:13 UTC, 21st Oct: As the issue had affected multiple clusters, additional engineers from GitHub's database engineering team started investigating the current state. This was to determine the actions needed to manually configure a US East Coast database as the primary for each cluster and rebuild the replication topology. This was quite tough, as the West Coast database cluster had ingested writes from GitHub's application tier for nearly 40 minutes. To keep user data safe, the engineers decided that the 30+ minutes of data written to the US West Coast data center had to be preserved. This prevented them from considering options other than failing forward, so they further extended the outage to ensure the consistency of users' data.

23:19 UTC, 21st Oct: After querying the state of the database clusters, GitHub stopped running jobs that write metadata about things such as pushes. This led to partially degraded site usability, as webhook delivery and GitHub Pages builds had been paused. "Our strategy was to prioritize data integrity over site usability and time to recovery," as per the GitHub team.

00:05 UTC, 22nd Oct: Engineers started resolving data inconsistencies and implementing failover procedures for MySQL. The recovery plan included failing forward, synchronization, falling back, and then churning through backlogs before returning to green. The time needed to restore multiple terabytes of backup data caused the process to take hours: decompressing, checksumming, preparing, and loading large backup files onto newly provisioned MySQL servers took a lot of time.

00:41 UTC, 22nd Oct: A backup process started for all affected MySQL clusters. Multiple teams of engineers started to investigate ways to speed up the transfer and recovery time.

06:51 UTC, 22nd Oct: Several clusters completed restoration from backups in the US East Coast data center and started replicating new data from the West Coast. This resulted in slow site load times for pages executing a write operation over a cross-country link. The GitHub team identified ways to restore directly from the West Coast in order to overcome the throughput restrictions caused by downloading from off-site storage. The status page was updated to set an expectation of two hours as the estimated recovery time.

07:46 UTC, 22nd Oct: GitHub published a blog post with more information. "We apologize for the delay. We intended to send this communication out much sooner and will be ensuring we can publish updates in the future under these constraints," said the GitHub team.

11:12 UTC, 22nd Oct: All database primaries were established in the US East Coast again. This made the site far more responsive, as writes were now directed to a database server located in the same physical data center as GitHub's application tier. Performance improved substantially, but there were dozens of database read replicas that lagged behind the primary, causing users to see inconsistent data on GitHub.

13:15 UTC, 22nd Oct: GitHub.com started to experience peak traffic load, so the engineers brought into service the additional MySQL read replicas they had begun provisioning in the US East Coast public cloud earlier in the incident.

16:24 UTC, 22nd Oct: Once the replicas were in sync, a failover to the original topology was conducted. This addressed the immediate latency and availability concerns. The service status was kept red while GitHub began processing the accumulated backlog of data, in order to prioritize data integrity.
16:45 UTC, 22nd Oct: At this point, engineers had to balance the increased load represented by the backlog, which could potentially overload GitHub's ecosystem partners with notifications. There were over five million hook events along with 80 thousand Pages builds queued. "As we re-enabled processing of this data, we processed ~200,000 webhook payloads that had outlived an internal TTL and were dropped. Upon discovering this, we paused that processing and pushed a change to increase that TTL for the time being," mentions the GitHub team. To avoid degrading the reliability of their status updates, GitHub remained in degraded status until the entire backlog of data had been processed.

23:03 UTC, 22nd Oct: By this point, all pending webhooks and Pages builds had been processed, and the integrity and proper operation of all systems had been confirmed. The site status was updated to green.

Apart from this, GitHub has identified a number of technical initiatives and continues to work through an extensive post-incident analysis process internally. "All of us at GitHub would like to sincerely apologize for the impact this caused to each and every one of you. We're aware of the trust you place in GitHub and take pride in building resilient systems that enable our platform to remain highly available. With this incident, we failed you, and we are deeply sorry," said the GitHub team.

For more information, check out the official GitHub blog post.

Developers rejoice! Github announces Github Actions, Github connect and much more to improve development workflows
GitHub is bringing back Game Off, its sixth annual game building competition, in November
GitHub comes to your code Editor; GitHub security alerts now have machine intelligence
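To make the read/write routing described above more concrete, here is a minimal, hypothetical Python sketch of an application-level router that sends writes to a cluster's primary and reads to its replicas. The class and host names are illustrative assumptions, not GitHub's actual tooling.

```python
import random

class ClusterRouter:
    """Toy read/write splitter for a single MySQL cluster.

    Writes always go to the cluster's current primary so that one
    authoritative write history exists; reads are spread across replicas.
    Hostnames are invented for illustration only.
    """

    WRITE_PREFIXES = ("INSERT", "UPDATE", "DELETE", "REPLACE")

    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = list(replicas)

    def host_for(self, query):
        # Mutating statements must hit the primary; read-only queries can be
        # served by any replica, accepting a little replication lag.
        if query.lstrip().upper().startswith(self.WRITE_PREFIXES):
            return self.primary
        return random.choice(self.replicas)

    def promote(self, new_primary):
        # A failover tool such as Orchestrator decides who the new primary is;
        # the application-side router only has to follow that decision.
        hosts = [self.primary] + self.replicas
        self.primary = new_primary
        self.replicas = [h for h in hosts if h != new_primary]


router = ClusterRouter("mysql-east-primary", ["mysql-east-replica1", "mysql-west-replica1"])
print(router.host_for("SELECT title FROM issues WHERE id = 42"))
print(router.host_for("UPDATE issues SET state = 'closed' WHERE id = 42"))
```

The incident above is essentially what happens when the "promote" decision is made against one data center while unreplicated writes still exist in the other.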


Richard DeVaul, Alphabet executive, resigns after being accused of sexual harassment

Natasha Mathur
31 Oct 2018
2 min read
It was only last week that the New York Times reported shocking allegations about the sexual misconduct of Andy Rubin (creator of Android) at Google. Now, Richard DeVaul, a director at unit X of Alphabet (Google's parent company), resigned from the company yesterday after being accused of sexually harassing Star Simpson, a hardware engineer. DeVaul has not received any exit package on his resignation.

As per the NY Times report, Richard DeVaul interviewed Star Simpson for a job reporting to him. He then invited her to Burning Man, an annual festival in the Nevada desert, the following week. DeVaul then sexually harassed Simpson at his encampment at Burning Man, making inappropriate requests to her. When Simpson reported DeVaul's sexual misconduct to Google two years later, one of the company officials shrugged her off, saying the story was "more likely than not" true and that appropriate corrective actions had been taken. DeVaul apologized in a statement to the New York Times, saying that the incident was "an error in judgment".

Sundar Pichai, Google's CEO, also apologized yesterday, saying that the "apology at TGIF didn't come through, and it wasn't enough" in an e-mail obtained by Axios. Pichai will also be supporting the women engineers at Google, who are organizing a "women's walk" walkout tomorrow in protest. "I am taking in all your feedback so we can turn these ideas into action. We will have more to share soon. In the meantime, Eileen will make sure managers are aware of the activities planned for Thursday and that you have the support you need", wrote Pichai.

Ex-googler who quit Google on moral grounds writes to Senate about company's "Unethical" China censorship plan
OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
Google takes steps towards better security, introduces new API policies for 3rd parties and a Titan Security system for mobile devices


Aragon 0.6 released on Mainnet allowing Aragon organizations to run on Ethereum Mainnet

Amrata Joshi
31 Oct 2018
3 min read
Yesterday, the team at Aragon announced the release of Aragon 0.6, named Alba, on Ethereum Mainnet. It's now possible to create Aragon organizations on the Ethereum Mainnet. Earlier, organizations were running on Ethereum testnets, without real-world consequences.

Aragon 0.5 was released seven months ago, and since then more than 2,500 organizations have been created with it. The total number of Aragon organizations has now crossed 15,000. Aragon 0.5 was the first release to be powered by AragonOS, and it was only deployed on the Rinkeby Ethereum Testnet.

Major updates in Aragon 0.6

1. Permissions
Permissions are a dynamic and powerful way to customize an organization. They manage who can access resources in your organization, and how. For example, one can create an organization in which funds can be withdrawn only after a vote has passed. Votes can be created only by a board of experts, while anyone in the organization is allowed to cast votes. Peers can also vote to create tokens to add new members. The possibilities are endless with Permissions, as almost any governance process can now be implemented.

2. Voting gets easier
Voting enables participation and collaborative decision-making. The team at Aragon has rebuilt the card-based voting interface from the ground up. This interface lets one look at the votes at a glance.

3. AragonOS 4
Aragon 0.6 features AragonOS 4, a smart contract framework for building DAOs, dapps, and protocols. AragonOS 4 is yet to be released but has already managed to create some buzz. Its architecture is based on the idea of a decentralized organization as an aggregate of multiple applications. The architecture also involves the use of the Kernel, which governs how these applications can talk to each other and how other entities can interact with them. AragonOS 4 makes interaction with Aragon organizations even more secure and stable.

It's easy to create your own decentralized organization now. You can start by choosing the network for your organization and follow the next steps on Aragon's official website. Note: the official blog post suggests not placing large amounts of funds in Aragon 0.6 organizations at this point, as there might be some unforeseen situations where user funds could be at risk.

Read more about Aragon 0.6 on Aragon's official blog post.

Stable version of OpenZeppelin 2.0, a framework for smart blockchain contracts, released!
IBM launches blockchain-backed Food Trust network which aims to provide greater transparency on food supply chains
9 recommended blockchain online courses


Cockroach Labs announced managed CockroachDB-as-a-Service

Amrata Joshi
31 Oct 2018
3 min read
This week, Cockroach Labs announced the availability of Managed CockroachDB. CockroachDB itself is an open source, geo-distributed database; Managed CockroachDB is a fully hosted and managed service, created and run by Cockroach Labs, that makes deploying, scaling, and managing CockroachDB effortless. Last year, the company announced version 1.0 of CockroachDB and $27 million in Series B financing, led by Redpoint with participation from Benchmark, GV, Index Ventures, and FirstMark.

Managed CockroachDB is cloud agnostic and available on AWS and GCP. The goal is to allow development teams to focus on building highly scalable applications without worrying about infrastructure operations. CockroachDB's design makes working with data easy by providing an industry-leading model for horizontal scalability and resilience to accommodate fast-growing businesses. It also improves the ability to move data closer to customers depending on their geo-location.

Fun Fact: Why the name 'Cockroach'? In a post published by Cockroach Labs three years ago, Spencer Kimball, CEO at Cockroach Labs, said, "You've heard the theory that cockroaches will be the only survivors post-apocalypse? Turns out modern database systems have a lot to gain by emulating one of nature's oldest and most successful designs. Survive, replicate, proliferate. That's been the cockroach model for geological ages, and it's ours too."

Features of Managed CockroachDB

Always-on service: Managed CockroachDB is an always-on service for critical applications, as it automatically replicates data across three availability zones for single-region deployments. As a globally scalable distributed SQL database, CockroachDB also supports geo-partitioned clusters at whatever scale the business demands.
Cockroach Labs manages the hardware provisioning, setup, and configuration for the managed clusters so that they are optimized for performance.
Since CockroachDB is cloud agnostic, one can migrate from one cloud service provider to another at peak load with zero downtime.
Automatic upgrades to the latest releases and hourly incremental backups of the data make operations even easier.
The Cockroach Labs team provides 24x7 monitoring and enterprise-grade security for all customers.

CockroachDB provides the capabilities for building ultra-resilient, high-scale, global applications. It features distributed SQL with ACID (Atomicity, Consistency, Isolation, Durability) transactions. Features like cluster visualization, priority support, native JSON support, and automated scaling make it even more unique.

Read more about this announcement on the Cockroach Labs official website.

SQLite adopts the rule of St. Benedict as its Code of Conduct, drops it to adopt Mozilla's community participation guidelines, in a week
MariaDB acquires Clustrix to give database customers 'freedom from Oracle lock-in'
Why Neo4j is the most popular graph database


Facebook open sources a set of Linux kernel products including BPF, Btrfs, Cgroup2, and others to address production issues

Bhagyashree R
31 Oct 2018
3 min read
Yesterday, Facebook open sourced a suite of Linux kernel components and tools. This suite includes products that can be used for resource control and utilization, workload isolation, load balancing, measuring, monitoring, and much more. Facebook has already started using these products on a massive scale throughout its infrastructure, and many other organizations are adopting them as well. The following are some of the products that have been open sourced:

Berkeley Packet Filter (BPF)
BPF is a highly flexible Linux kernel code execution engine. It enables safe and easy modifications of kernel behavior with custom code by allowing bytecode to run at various hook points. Currently, it is widely used for networking, tracing, and security in a number of Linux kernel subsystems.

What can you do with it?
You can extend Linux kernel behavior for a variety of purposes such as load balancing, container networking, kernel tracing, monitoring, and security. You can solve production issues where user-space solutions alone aren't enough by executing user-supplied code in the kernel.

Btrfs
Btrfs is a copy-on-write (CoW) filesystem, which means that instead of overwriting data in place, all updates to metadata or file data are written to a new location on the disk. Btrfs mainly focuses on fault tolerance, repair, and easy administration. It supports features such as snapshots, online defragmentation, pooling, and integrated multiple-device support. It is the only filesystem implementation that works with resource isolation.

What can you do with it?
You can address and manage large storage subsystems by leveraging features like snapshots, load balancing, online defragmentation, pooling, and integrated multiple-device support. You can manage, detect, and repair errors with data and metadata checksums, mirroring, and file self-healing.

Netconsd (Netconsole daemon)
Netconsd is a UDP-based daemon that provides lightweight transport for Linux netconsole messages. It receives and processes log data from the Linux kernel and serves it up as structured data. Simply put, it works with a kernel module that sends all kernel log messages over the network to another computer, without involving user space.

What can you do with it?
You can detect, reorder, or request retransmission of missing messages with the provided metadata. You can extract a meaningful signal from the data logged by netconsd to rapidly identify and diagnose misbehaving services.

Cgroup2
Cgroup2 is a Linux kernel feature that allows you to group and structure workloads and control the amount of system resources assigned to each group. It consists of controllers for memory, I/O, central processing unit, and more. Using cgroup2, you can isolate workloads, prioritize them, and configure the distribution of resources (a short sketch follows below).

What can you do with it?
You can create isolated groups of processes and then control and measure the distribution of memory, IO, CPU, and other resources for each group. You can detect resource shortages using PSI pressure metrics for memory, IO, and CPU. With cgroup2, production engineers can deal with increasing resource pressure more proactively and prevent conflicts between workloads.

Along with these products, Facebook has open sourced Pressure Stall Information (PSI), oomd, and many others. You can find the complete list of these products on the Facebook Open Source website, and also check out the official announcement.
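As a rough illustration of the cgroup2 interface described above, here is a minimal Python sketch that creates a group, caps its memory, moves the current process into it, and reads the PSI memory-pressure file. It assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup with the memory controller enabled for child groups, a kernel built with PSI support, and root privileges; the group name is made up for illustration.

```python
import os

CGROUP_ROOT = "/sys/fs/cgroup"               # assumes a cgroup v2 (unified) mount
GROUP = os.path.join(CGROUP_ROOT, "demo")    # hypothetical group name


def write(path, value):
    with open(path, "w") as f:
        f.write(value)


# Create an isolated group of processes.
os.makedirs(GROUP, exist_ok=True)

# Cap the group's memory at 512 MiB; once the limit is hit, the kernel
# reclaims or OOM-kills inside this group without disturbing others.
write(os.path.join(GROUP, "memory.max"), str(512 * 1024 * 1024))

# Move the current process (and, by inheritance, its children) into the group.
write(os.path.join(GROUP, "cgroup.procs"), str(os.getpid()))

# PSI pressure metrics, mentioned above, are plain files too.
with open(os.path.join(GROUP, "memory.pressure")) as f:
    print(f.read())
```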
Facebook open sources QNNPACK, a library for optimized mobile deep learning
Facebook introduces two new AI-powered video calling devices "built with Privacy + Security in mind"
Facebook's Glow, a machine learning compiler, to be supported by Intel, Qualcomm and others


Helium proves to be less than an ‘ideal gas’ for iPhones and Apple watches

Prasad Ramesh
31 Oct 2018
3 min read
'Hey, turn off the helium, it's bad for my iPhone' is not something you hear every day. In an unusual event this month, an MRI machine installation at a facility affected iPhones and Apple Watches. Many iPhone users in the facility started to experience issues with their devices; the devices simply stopped working. Originally, an EMP burst was suspected of shutting down the devices, but it was noted that only iPhone 6 and above and Apple Watch Series 0 and above were affected. The only iPhone 5 in the building and the Android phones remained functional. Luckily, none of the patients reported any issue.

The cause turned out to be the new MEMS oscillators used in the newer, affected devices. These tiny components are used to measure time and can work properly only in certain conditions, such as a vacuum or a specific gas surrounding the piece. Helium, being a sneaky one-atom gas, can get through the tiniest of crevices.

An MRI machine was being installed, and in the process the coolant, helium, leaked. Approximately 120 liters of liquid helium leaked over the span of 5 hours. Helium expands hundreds of times over when it turns from liquid to gas, and with a boiling point of around −268 °C, it did so at room temperature. You could say a large part of a building could be flooded with the gas given 120 liters (see the rough calculation at the end of this article).

Apple does mention this in its official iPhone help guide: "Exposing iPhone to environments having high concentrations of industrial chemicals, including near evaporating liquified gasses such as helium, may damage or impair iPhone functionality."

So what if your device is affected? Apple also mentions: "If your device has been affected and shows signs of not powering on, the device can typically be recovered. Leave the unit unconnected from a charging cable and let it air out for approximately one week. The helium must fully dissipate from the device, and the device battery should fully discharge in the process. After a week, plug your device directly into a power adapter and let it charge for up to one hour. Then the device can be turned on again."

The original poster on Reddit, harritaco, even performed an experiment and posted it on YouTube. Although not much happens in the 8-minute video, he says he repeated the experiment for 12 minutes and the phone turned off. For more details and discussions, visit the Reddit thread.

A decade of Android: Slayer of Blackberry, challenger of iPhone, mother of the modern mobile ecosystem
Apple launches iPad Pro, updates MacBook Air and Mac mini
Facebook and NYU are working together to make MRI scans 10x faster
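For a rough sense of scale, here is a back-of-the-envelope calculation in Python, assuming the commonly cited liquid-to-gas expansion ratio of roughly 1:750 for helium at room temperature and pressure (the exact figure varies with conditions):

```python
# Rough estimate of how much gas 120 litres of liquid helium becomes.
liquid_litres = 120
expansion_ratio = 750          # assumed ~1:750 at room temperature and pressure
gas_litres = liquid_litres * expansion_ratio

print(f"about {gas_litres:,} litres of helium gas (~{gas_litres / 1000:.0f} cubic metres)")
# -> about 90,000 litres, i.e. roughly 90 m^3, which over five hours could
#    easily spread through a large part of a building.
```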

BabyAI: A research platform for grounded language learning with human in the loop, by Yoshua Bengio et al

Amrata Joshi
31 Oct 2018
4 min read
Last week, researchers from the University of Montreal, the University of Oregon, and IIT Bombay published a paper titled 'BabyAI: First Steps Towards Grounded Language Learning With a Human In the Loop' that introduces the BabyAI platform. This platform provides a heuristic expert agent for the purpose of simulating a human teacher and also supports humans in the loop for grounded language learning. The BabyAI platform uses a synthetic language, Baby Language, for giving instructions to the agent. For convenience, the researchers have implemented a 2D gridworld environment called MiniGrid, as it's simple and easy to work with. The BabyAI platform includes a verifier that checks whether an agent performing a sequence of actions in a given environment has successfully achieved its goal.

Why was the BabyAI platform introduced?
It's difficult for humans to train an intelligent agent to understand natural language instructions. No matter how advanced AI technology becomes, human users will always want to customize their intelligent helpers to understand their desires and needs better. The main obstacle in language learning with a human in the loop is the amount of data that would be required. Deep learning methods, used in the context of imitation learning or reinforcement learning paradigms, could be effective, but even they require enormous amounts of data. This data could be in the form of millions of reward function queries or thousands of demonstrations. The BabyAI platform focuses on data efficiency, as the researchers measure the minimum number of samples required to solve several levels with imitation and reinforcement learning baselines. The platform and pretrained models will also be available online, which should help improve the data efficiency of grounded language learning.

The Baby Language
Baby Language is a combinatorially rich subset of English, designed to be easily understood by humans. In this language, the agent can be instructed to go to objects, pick them up, open doors, and put objects next to other ones. The language can also express combinations of several such tasks, for example, "put a red ball next to the green box after you open the door". In order to keep the instructions readable by humans, the researchers have kept the language simple, but it still exhibits interesting combinatorial properties and contains 2.48 × 10^19 possible instructions. There are a couple of structural restrictions on this language, including:

The 'and' connector can only appear inside the 'then' and 'after' forms.
Instructions can contain only one 'then' or 'after' word.

MiniGrid: The environment that supports the BabyAI platform
Since data-efficiency studies are very expensive, as multiple runs are required for different amounts of data, the researchers have aimed to keep the design of the environment minimalistic. They have implemented MiniGrid, an open source, partially observable 2D gridworld environment. This environment is fast and lightweight. It is available online and supports integration with OpenAI Gym (a minimal usage sketch appears at the end of this article).

Experiments conducted and the results
The researchers assess the difficulty of BabyAI levels by training an imitation learning baseline for each level. Moreover, they have estimated how much data is required to solve some of the simpler levels. They have also studied to what extent the data demands can be reduced with the help of basic curriculum learning and interactive teaching methods.
The results suggest that the current imitation learning and reinforcement learning methods both scale and generalise poorly when it comes to learning tasks with a compositional structure. Thousands of demonstrations are needed to learn tasks that seem trivial by human standards. Methods like curriculum learning and interactive learning can provide measurable improvements in terms of data efficiency, but for involving an actual human in the loop, an improvement of at least three orders of magnitude is required.

Future Scope
The direction of future research is towards finding strategies to improve the data efficiency of language learning. Tackling such a challenge might require new models and teaching methods. Approaches that involve an explicit notion of modularity and subroutines, such as Neural Module Networks or Neural Programming Interpreters, also seem to be a promising direction.

To know more about the BabyAI platform, check out the research paper published by Yoshua Bengio et al.

Facebook open sources QNNPACK, a library for optimized mobile deep learning
Facebook's Child Grooming Machine Learning system helped remove 8.7 million abusive images of children
Optical training of Neural networks is making AI more efficient
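As a rough sketch of how a BabyAI/MiniGrid level is typically driven through the OpenAI Gym interface, here is a minimal Python example. The environment id 'BabyAI-GoToRedBall-v0' and the observation keys are assumptions based on the level names in the paper and MiniGrid's conventions; check the project's repository for the ids it actually registers.

```python
import gym
import babyai  # assumed to register the BabyAI-* environment ids with Gym

# Hypothetical level id, named after one of the levels described in the paper.
env = gym.make("BabyAI-GoToRedBall-v0")

obs = env.reset()
# BabyAI observations are assumed to pair a partial egocentric grid view
# with the Baby Language instruction, e.g. "go to the red ball".
print(obs["mission"])

done = False
while not done:
    action = env.action_space.sample()        # random agent, just to show the loop
    obs, reward, done, info = env.step(action)

# The platform's verifier decides success: a positive reward means the
# instruction was carried out before the episode timed out.
print("reward:", reward)
```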


Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report

Melisha Dsouza
31 Oct 2018
3 min read
Xilinx Inc. has reportedly won orders from Microsoft Corp.'s Azure cloud unit to account for half of the co-processors currently used on Azure servers to handle machine-learning workloads. This will replace chips made by Intel Corp., according to people familiar with Microsoft's plans, as reported by Bloomberg.

Microsoft's decision comes as a move to add another chip supplier in order to serve more customers interested in machine learning. To date, this domain was run by Intel's Altera division. Now that Xilinx has bagged the deal, does this mean Intel will no longer serve Microsoft? Bloomberg reported Microsoft's confirmation that it will continue its relationship with Intel in its current offerings. A Microsoft spokesperson added that "There has been no change of sourcing for existing infrastructure and offerings". Sources familiar with the arrangement also commented that Xilinx chips will have to achieve performance goals to determine the scope of their deployment.

Cloud vendors these days are investing heavily in research and development centering around machine learning. The past few years have seen an increased need for flexible chips that can be configured to run machine-learning services. Companies like Microsoft, Google, and Amazon are massive buyers of server chips and are always looking for alternatives to standard processors to increase the efficiency of their data centres.

Holger Mueller, an analyst with Constellation Research Inc., told SiliconANGLE that "Programmable chips are key to the success of infrastructure-as-a-service providers as they allow them to utilize existing CPU capacity better. They're also key enablers for next-generation application technologies like machine learning and artificial intelligence."

Earlier this year, Xilinx CEO Victor Peng made clear his plans to focus on data center customers, saying "data center is an area of rapid technology adoption where customers can quickly take advantage of the orders of magnitude performance and performance per-watt improvement that Xilinx technology enables in applications like artificial intelligence (AI) inference, video and image processing, and genomics".

Last month, Xilinx made headlines with the announcement of a new breed of computer chips designed specifically for AI inference. These chips combine FPGAs with two higher-performance Arm processors, plus a dedicated AI compute engine, and relate to the application of deep learning models in consumer and cloud environments. The chips promise higher throughput, lower latency, and greater power efficiency than existing hardware. It looks like Xilinx is taking noticeable steps to make itself seen in the AI market.

Head over to Bloomberg for the complete coverage of this news.

Microsoft Ignite 2018: New Azure announcements you need to know
Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
Microsoft invests in Grab; together aim to conquer the Southeast Asian on-demand services market with Azure's Intelligent Cloud


Apple launches iPad Pro, updates MacBook Air and Mac mini

Prasad Ramesh
31 Oct 2018
3 min read
At an event in Brooklyn, New York yesterday, Apple unveiled the new iPad Pro, the new MacBook Air, and the Mac mini.

iPad Pro
Following the trend, the new iPad Pro sports a larger screen-to-body ratio with minimal bezels. Powering the new iPad is an eight-core A12X Bionic chip, which is powerful enough for the full Photoshop CC coming in 2019. There is a USB-C connector, Gigabit-class LTE, and up to 1TB of storage. There are two variants, with 11-inch and 12.9-inch Liquid Retina displays. The display can go up to 120Hz for smooth scrolling, but the headphone jack has been removed. Battery life is stated to be 10 hours. The dedicated Neural Engine supports tasks requiring machine learning, from photography to AR. Apple is calling it the 'best device ever for AR' due to its cameras, sensors, and improved four-speaker audio combined with the power of the A12X Bionic chip. There is also a second-generation Apple Pencil that magnetically attaches to the iPad Pro and charges at the same time. The Smart Keyboard Folio is made for versatility. The keyboard and Apple Pencil are sold separately.

MacBook Air
The new MacBook Air features a 13-inch Retina display, Touch ID, a newer i5 processor, and a more portable design compared to the previous MacBook. This MacBook Air is the cheapest MacBook to sport a Retina display, with a resolution of 2560×1600. There is a built-in 720p FaceTime camera. For better security, there is Touch ID, a fingerprint sensor built into the keyboard, and a T2 security chip. Each key in the keyboard is individually lit, and the touchpad area is also larger. The new MacBook Air comes with an 8th generation Intel Core i5 processor, Intel UHD Graphics, and faster 2133 MHz RAM up to 16GB. The storage options go up to 1.5TB. There are only two Thunderbolt 3 USB-C ports and a 3.5mm headphone jack, no other connectors. Apple says that the new MacBook Air is faster and provides a snappier experience.

Mac mini
The Mac mini got a big performance boost, being five times faster than the previous one. There are options for either four- or six-core processors, with turbo boost that can go up to 4.6GHz. Both versions come with Intel UHD Graphics 630. For memory, there is up to 64GB of 2666 MHz RAM. The new Mac mini also features a T2 security chip, so files stored on the SSD are automatically and fully encrypted. There are four Thunderbolt 3 ports and a 10-gigabit Ethernet port, plus an HDMI 2.0 port, two USB-A ports, and a 3.5mm audio jack. The storage options go up to 2TB.

Apple says that both the MacBook Air and the Mac mini are made with 100% recycled aluminum, which reduces the carbon footprint of these devices by 50%. Visit the Apple website to see availability and pricing of the iPad Pro, MacBook Air, and Mac mini.

'Think different' makes Apple the world's most valuable company, crossing $1 Trillion market cap
Apple releases iOS 12 beta 2 with screen time and battery usage updates among others
Apple and Amazon take punitive action against Bloomberg's 'misinformed' hacking story


Fedora 29 released with Modularity, Silverblue, and more

Bhagyashree R
31 Oct 2018
3 min read
After releasing Fedora 29 beta last month, the Fedora community has announced the stable release of Fedora 29. This is the first release to include Fedora Modularity across all Fedora variants, that is, Workstation, Server, and Atomic Host. Other updates include an upgrade to GNOME 3.30, ZRAM for ARM images, and a Vagrant image for Fedora Scientific. Additionally, Node.js is now updated to Node.js 10.x as the default Node.js interpreter, Python 3.6 is updated to Python 3.7, and Ruby on Rails is updated to 5.2.

Fedora Modularity
Modularity gives you the option to install additional versions of software on independent life cycles. You no longer have to base your whole OS upgrade decision on individual package versions. It allows you to keep your OS up to date while keeping the right version of an application, even when the default version in the distribution changes. These are the advantages it comes with:

Moving fast and slow
Different users have different needs; for instance, while developers want the latest versions possible, system administrators prefer stability for a longer period of time. With Fedora Modularity, you can make some parts move slowly and other parts move faster, choosing between the latest release and stability as per your use case.

Automatically rebuild containers
Many containers are built manually and are not actively maintained. Very often, they are not patched with security fixes but are still used by many people. To allow maintaining and building multiple versions, Modularity comes with an environment for packagers, and these containers get automatically rebuilt every time the packages get updated.

Automating packager workflow
Often, Fedora contributors have to maintain their packages in multiple branches. As a result, they have to perform a series of manual steps during the build process. Modularity allows packagers to maintain a single source for multiple outputs and brings additional automation to the package build process.

Fedora Silverblue
This release introduces the newly named Fedora Silverblue, formerly known as Fedora Atomic Workstation. It provides atomic upgrades, easy rollbacks, and workflows that are familiar from OSTree-based servers. Additionally, it delivers desktop applications as Flatpaks. This gives better isolation and solves longstanding issues with using yum/dnf for desktop applications.

GNOME 3.30
The default desktop environment of Fedora 29 is based on GNOME 3.30. This version of GNOME comes with improved desktop performance and screen sharing. It supports automatic updates for Flatpak, a next-generation technology for building and distributing applications on Linux.

Read the full announcement of the Fedora 29 release on its official website.

Swift is now available on Fedora 28
Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes
GNOME 3.30 released with improved Desktop performance, Screen Sharing, and more

Apple T2 security chip has Touch ID, Security Enclave, hardware to prevent microphone eavesdropping, amongst many other features!

Melisha Dsouza
31 Oct 2018
4 min read
Apple's special event held in Brooklyn yesterday saw the unveiling of a host of new hardware and software, including the MacBook Air 2018 and the Mac mini. Along with this, Apple also published a complete security overview white paper that lists in detail the workings of its T2 security chip, incorporated into the Mac mini and MacBook Air. The chip disconnects the device's microphone when the laptop is closed. It also prevents tampering with data while introducing a strict level of security for Apple's devices. Let's look at the features of this chip that caught our attention.

#1 Disabling the microphone on closing the laptop
One of the major features of the T2 chip is disconnecting the device's microphone when the laptop is closed. The chip, first introduced in last year's iMac Pro, has been upgraded to prevent any kind of malware from eavesdropping on a user's conversation once the laptop's lid is shut. Apple further notes that the camera is not disabled because the field of view of the lens is completely obstructed while the lid is closed.

#2 Secure Enclave
The Secure Enclave is a coprocessor incorporated within the system on chip (SoC) of the Apple T2 Security Chip. It provides dedicated security by protecting the necessary cryptographic keys for FileVault and secure boot. What's more, it processes fingerprint data from the Touch ID sensor and checks whether a match is present. Apple further mentions that its limited function is a virtue: "Security is enhanced by the fact that the hardware is limited to specific operations."

#3 Storage encryption
The Apple T2 Security Chip has a dedicated AES crypto engine built into the DMA path between the flash storage and main system memory. This makes it efficient to perform internal volume encryption using FileVault with AES-XTS. The Mac's unique ID (UID) and a device group ID (GID) are AES 256-bit keys included in the Secure Enclave during manufacturing. The design ensures that no software or firmware can read the keys directly; they can be used only by the AES engine dedicated to the Secure Enclave. The UID is unique to each device and is generated completely within the Secure Enclave rather than in a manufacturing system outside of the device. Hence, the UID key isn't available for access or storage by Apple or any Apple suppliers. Software that runs on the Secure Enclave takes advantage of the UID to protect Touch ID data, FileVault class keys, and the Keychain.

#4 Touch ID
The T2 chip processes the data from Touch ID to authenticate a user. The Touch ID data is a mathematical representation of the fingerprint, which is encrypted and stored on the device. It is protected with a key available only to the Secure Enclave, which is used to verify a match with the enrolled information. The data cannot be accessed by macOS or by any apps running on it, is never stored on Apple servers, and is not backed up to iCloud. This ensures that only authenticated users can access the device.

#5 Secure Boot
The T2 Security Chip ensures that each step of the startup process contains components that are cryptographically signed by Apple to verify integrity. The boot process proceeds only after verifying the integrity of the software at every step. When a Mac computer with the T2 chip is turned on, the chip executes code from read-only memory known as the Boot ROM. This unchangeable code, referred to as the hardware root of trust, is laid down during chip fabrication and audited for vulnerabilities to ensure all-round security of the process.
These robust features of the T2 chip are definitely something to watch out for. You can read the white paper to understand more about the chip's features.

Apple and Amazon take punitive action against Bloomberg's 'misinformed' hacking story
Apple now allows U.S. users to download their personal data via its online privacy data portal
Could Apple's latest acquisition yesterday of an AR lens maker signal its big plans for its secret Apple car?


Google AdaNet, a TensorFlow-based AutoML framework

Sugandha Lahoti
31 Oct 2018
3 min read
Google researchers have come up with a new AutoML framework that can automatically learn high-quality models with minimal expert intervention. Google AdaNet is a fast, flexible, and lightweight TensorFlow-based framework for learning a neural network architecture and learning to ensemble subnetworks to obtain even better models.

How does Google AdaNet work?
As machine learning models increase in number, AdaNet will automatically search over neural architectures and learn to combine the best ones into a high-quality model. AdaNet implements an adaptive algorithm for learning a neural architecture as an ensemble of subnetworks. It can add subnetworks of different depths and widths to create a diverse ensemble, and it trades off performance improvement against the number of parameters. This saves ML engineers the time spent selecting optimal neural network architectures. (A toy sketch of this adaptive-ensembling idea appears at the end of this article.)

AdaNet: Built on TensorFlow
AdaNet implements the TensorFlow Estimator interface. This interface simplifies machine learning programming by encapsulating training, evaluation, prediction, and export for serving. AdaNet also integrates with open source tools like TensorFlow Hub modules, TensorFlow Model Analysis, and Google Cloud's Hyperparameter Tuner. TensorBoard integration helps to monitor subnetwork training, ensemble composition, and performance. TensorBoard is one of the best TensorFlow features for visualizing model metrics during training. When AdaNet is done training, it exports a SavedModel that can be deployed with TensorFlow Serving.

How to extend AdaNet to your own projects
Machine learning engineers and enthusiasts can define their own adanet.subnetwork.Builder using high-level TensorFlow APIs like tf.layers. Users who have already integrated a TensorFlow model in their system can use the adanet.Estimator to boost model performance while obtaining learning guarantees. Users are also invited to use their own custom loss functions via canned or custom tf.contrib.estimator.Heads in order to train regression, classification, and multi-task learning problems. Users can also fully define the search space of candidate subnetworks to explore by extending the adanet.subnetwork.Generator class.

Experiments: NASNet-A versus AdaNet
Google researchers took an open source implementation of a NASNet-A CIFAR architecture and transformed it into a subnetwork. They were able to improve upon the CIFAR-10 results after eight AdaNet iterations, and the model achieves this result with fewer parameters (see the figure "Performance of a NASNet-A model versus AdaNet learning to combine small NASNet-A subnetworks on CIFAR-10" in the original post; source: Google).

You can check out the GitHub repo and walk through the tutorial notebooks for more details. You can also have a look at the research paper.

Top AutoML libraries for building your ML pipelines
Anatomy of an automated machine learning algorithm (AutoML)
AmoebaNets: Google's new evolutionary AutoML
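To make the adaptive-ensembling idea concrete without depending on the AdaNet library itself, here is a toy, library-free Python sketch: at each iteration, candidate 'subnetworks' of different widths compete, and the one with the best validation score after a complexity penalty is added to a growing ensemble. This illustrates the general idea only; it is not AdaNet's actual algorithm, objective, or learning guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(x[:, 0]) + 0.1 * rng.standard_normal(400)
x_tr, y_tr, x_va, y_va = x[:300], y[:300], x[300:], y[300:]

def train_candidate(width, residual):
    """Fit one tiny random-feature 'subnetwork' of a given width to the residual."""
    w = rng.standard_normal((1, width))
    b = rng.uniform(-3, 3, width)
    hidden = np.tanh(x_tr @ w + b)                            # fixed random hidden layer
    coef, *_ = np.linalg.lstsq(hidden, residual, rcond=None)  # fit output weights
    return lambda x_new: np.tanh(x_new @ w + b) @ coef

ensemble = []
pred_tr = np.zeros_like(y_tr)
pred_va = np.zeros_like(y_va)
penalty = 1e-3                                    # trade accuracy against parameter count

for iteration in range(5):
    residual = y_tr - pred_tr
    best = None
    for width in (2, 8, 32):                      # candidates of different widths compete
        predict = train_candidate(width, residual)
        score = np.mean((y_va - (pred_va + predict(x_va))) ** 2) + penalty * width
        if best is None or score < best[0]:
            best = (score, predict, width)
    score, predict, width = best
    ensemble.append(predict)                      # greedily grow the ensemble
    pred_tr += predict(x_tr)
    pred_va += predict(x_va)
    print(f"iteration {iteration}: added width-{width} candidate, val score {score:.4f}")
```

The real AdaNet grows its ensemble from trained subnetworks under a complexity-regularized objective and exposes the whole loop through the Estimator interface mentioned above.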


The tech monopoly and income inequality: Why innovation isn't for everyone

Amarabha Banerjee
30 Oct 2018
5 min read
"Capital is dead labor, which, vampire-like, lives only by sucking living labor, and lives the more, the more labor it sucks," Karl Marx.

An explanation is due after that introductory statement. I am not going to whine about how capitalism is a necessary evil and socialism or similar 'isms' promise a better future. Let's agree on one basic fact: we are living in one of the most peaceful and technologically advanced eras of human civilization on Earth (that we know of yet). No major worldwide wars have taken place for around 75 years. We have developed tech miracles, with smartphones and social media becoming the new-age essential commodities.

It would be naive on my part to start blaming the burgeoning social media business and the companies involved for the decreasing happiness index and the gaping hole between the incomes of the rich and the poor. But then facts don't lie. A recent study conducted by UC Berkeley (Chris Benner, Gabriela Giusta, Louise Auerhahn, Bob Brownstein, Jeffrey Buchanan) highlights the growing income inequality in Silicon Valley.

Source: UC Berkeley

The chart shows the dip in income of employees belonging to different income brackets. The X-axis shows the percentile of total income earners. We can clearly see that the biggest dip has happened in the middle-income earner section (14.2%). The top tier has seen a rise of 1% and the bottom end is also pretty much stagnant, with a dip of 1%. The biggest impact has been on those slap bang in the middle of the average income bracket.

This is particularly alarming because from 1997 to 2017 the owners of some of the planet's biggest tech companies have seen a massive jump in their earnings. Companies like Amazon, Facebook, and Google have earned enormous wealth and control over the tech landscape. Amazon's Jeff Bezos is presently sitting on top of a $150 billion fortune. The anticipation of the majority of the population was that tech would improve the average quality of life, increase the minimum wage for low-rung workers, and improve the economic status of the middle class. The existing situation seems to point in the exact opposite direction. The reasons can be summed up as below:

The competition among tech companies to survive is immense. Hence, profits are largely invested in R&D and in developing 'better' futuristic solutions, or new novel products for consumers and users. Advertisement and promotional campaigns have also become significant factors in survival strategies. That's why we don't see companies building affordable housing for their workforce anymore, and bonuses have become rare events. The money comes in and goes back into the wheel, and the employees are given the reasoning that, to survive, the company will have to innovate.

The survival rate of start-ups is very low in the tech domain. Hence, more and more startups are shrinking their budgets and ensuring that they can breach the profit margin early. To a certain extent this makes sense, but it is damaging for the people who join to explore new domains in their career. If the startup fails, they have to start afresh, and if it is even moderately successful they may find they are working astonishingly long hours for very little reward.

The modern-day tech workforce is not organized in any manner. The top tech companies discourage their employees from creating any form of labour union.
While activism at work has not exactly been a good influence, the complete absence of it has often proved to be a disadvantage for the workforce, particularly given the tumultuous conditions of working in the tech field.

The race to reinvent the wheel, even when the system is running at a decent progressive pace, is what has brought human civilization to its present state. Global wealth distribution is skewed in favor of the rich few as badly as it possibly can be. Monopoly in the tech market is not helping the cause. The internet is slowly becoming a playground for rich kids who own everything, right from the ground itself to the tools that keep it in shape. The rules are determined by them. The frenzy over yearly new tech releases is so huge and so marketable that people have stopped asking the fundamental question of it all: what's actually new?

This is what capitalism stood for during most of the 20th century, even if it wasn't as immediately clear then as it is now. It made people believe that consumerism can make their daily lives better, that it's perfectly ok to let go of a few basic humanitarian values in the pursuit of wealth, and most importantly that everyone can achieve it if they work hard and put their heart and soul into it. Today, technology continues to show us such dreams. Artificial intelligence and machine learning can make our lives better, self-driving cars should ease traffic issues and halt the march of global warming. But just because we see these dreams doesn't mean they're becoming true. The many people at the heart of this innovation should feel the positive impact of these changes in their own lives, not longer working hours and precarious employment.

The constant urge to win the race shouldn't make the rich richer and the poor poorer. If that pattern starts to emerge and is upheld in the next 4-5 years, then we can surely conclude that Capitalism 2.0, the type of capitalism that benefits from vulnerability and not from the power and creativity we share, has finally taken its full form. We might have only ourselves to blame.

OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
Facebook finds 'no evidence that hackers accessed third party Apps via user logins'
Is YouTube's AI Algorithm evil?

React Conf 2018 highlights: Hooks, Concurrent React, and more

Bhagyashree R
30 Oct 2018
4 min read
React Conf 2018 was held on October 25-26 in Henderson, Nevada, USA. At this conference, the React team introduced Hooks, which let you use React without classes. On the second day, they spoke about time slicing and code-splitting, and introduced the React Cache and Scheduler APIs that we will see in coming releases.

Day 1: Unveiling Hooks
React Conf 2018 was kick-started by Sophie Alpert, Engineering Manager at Facebook. She highlighted that many big companies like Amazon, Apple, Facebook, and Google are using React and that there has been a huge increase in npm downloads. React's primary mission is to allow web developers to create great UIs. This is enabled by three properties of React:

Simplifying things that are difficult
Focusing on performance
Developer tooling

But there are still a few limitations in React that need to be addressed to achieve the mission React aims for. It doesn't provide a stateful primitive that is simpler than a class component. One of the earlier solutions to this was Mixins, but it has come to be known for introducing more problems than it solves. Here are the three limitations that were discussed in the talk:

Reusing logic between multiple components: In React, sharing code is enabled by two mechanisms, higher-order components and render props. But to use them you need to restructure your component hierarchy.
Giant components: There are many cases where components start out simple but grow into an unmanageable mess of stateful logic and side effects. Also, very often the lifecycle methods end up with a mix of unrelated logic. This makes it quite difficult to break these components into smaller ones because the stateful logic is all over the place.
Confusing classes: Understanding classes in JavaScript is quite difficult. Classes in JavaScript work very differently from how they work in most languages. You have to remember to bind the event handlers. Also, classes make it difficult to implement hot-reloading reliably.

In order to solve these problems in React, Dan Abramov introduced Hooks, followed by Ryan Florence demonstrating how to refactor an application to use them. Hooks allow you to "hook into" or use React state and other React features from function components. The biggest advantage is that Hooks don't work inside classes and let you use React without classes.

Day 2: Concurrent rendering in React
On day 2 of React Conf, Andrew Clark spoke about concurrent rendering in React. Concurrent rendering allows developers to invest less time thinking about code and focus more on the user experience. But what exactly is concurrent rendering? Concurrent rendering can work on multiple tasks at a time, switching between them according to their priority. With concurrent rendering, you can partially render a tree without committing the result to the DOM. It does not block the main thread and is designed to solve real-world problems commonly faced by UI developers. Concurrent rendering in React is enabled by the following:

Time Slicing
The basic idea of time slicing is to build a generic way to ensure that high-priority updates don't get blocked by low-priority updates. With time slicing, the rendered screen is always consistent and we don't see visual artifacts of slow rendering causing a poor user experience.
These are the advantages time slicing comes with:

Rendering is non-blocking
Multiple updates can be coordinated at different priorities
Content can be prerendered in the background without slowing down visible content

Code-splitting and lazy loading with lazy() and Suspense
You can now render a dynamic import as a regular component with the React.lazy() function. Currently, React.lazy only supports default exports. You can create an intermediate module to re-export a module that uses named exports. This ensures that tree-shaking keeps working and that you don't pull in unused components.

Until a lazily loaded component renders, we must show some fallback content to the user, for example, a loading indicator. This is done using the Suspense component. It is a way for components to suspend rendering while they load async data. It allows you to pause any state update until the data is ready, and you can add async loading to any component deep in the tree without plumbing all the props and state through your app and hoisting the logic.

The latest React 16.6 comes with these two features, that is, lazy and Suspense. Hooks were recently released with React 16.7-alpha. In the coming releases, we will see two new APIs called React Cache and Scheduler. You can watch the demos by the React developers to understand these new concepts in more detail.

React introduces Hooks, a JavaScript function to allow using React without classes
React 16.6.0 releases with a new way of code splitting, and more!
InfernoJS v6.0.0, a React-like library for building high-performance user interfaces, is now out


Twitter plans to disable the ‘like’ button to promote healthy conversations; should retweet be removed instead?

Savia Lobo
30 Oct 2018
4 min read
Yesterday, Twitter's CEO Jack Dorsey announced that the popular social media platform might eliminate its heart-shaped like button, according to The Telegraph. The Twitter communications team further clarified in a tweet that, as part of its 'commitment to healthy conversation,' the company is 'rethinking everything about the service,' including the like button.

At the Wired25 summit held on the 15th of October, Dorsey made an onstage remark questioning the "like" button's worth in facilitating meaningful communication. He said, "Is that the right thing? Versus contributing to the public conversation or a healthy conversation? How do we incentivize healthy conversation?" Twitter had also vowed to "increase the collective health, openness, and civility of the dialogue on our service" in its blog post in July. Prior to this, the company had introduced 'Bookmarks', an easy way to save Tweets for quick access later without having to like them.

Ben Grosser, an artist and professor at the University of Illinois, says, "I fear that if they remove the Like button the fact that there are other indicators that include metrics will just compel users to use those other indicators." A Twitter spokesperson told the Telegraph, "At this point, there is no specific timeline for changes or particular planned changes to discuss". He added, "We're experimenting and considering numerous possible changes, all with an eye toward ensuring we're incentivizing the right behaviors to drive a healthy conversation."

Should Retweet be eliminated instead?
The Atlantic speculates that "If Twitter really wants to control the out-of-control rewards mechanisms it has created, the retweet button should be the first to go." Retweets, not likes, are Twitter's most powerful method of reward, according to The Atlantic. The more retweets a post gets, the more likely it is to go viral on social media. According to MIT research, Twitter users retweet fake news almost twice as much as real news. Other Twitter users, desperate for validation, endlessly retweet their own tweets, spamming followers with duplicate information.

Twitter introduced retweets to ensure that the most interesting and engaging content would show up in the feed and keep users entertained. The tweets shown on the platform are a result of an algorithmic accounting of exactly what the most interesting and engaging content is.

In April, Alexis Madrigal wrote about how he used a script to eliminate retweets from his timeline and how it transformed his experience for the better. "Retweets make up more than a quarter of all tweets. When they disappeared, my feed had less punch-the-button outrage," he wrote. "Fewer mean screenshots of somebody saying precisely the wrong thing. Less repetition of big, big news. Fewer memes I'd already seen a hundred times. Less breathlessness. And more of what the people I follow were actually thinking about, reading, and doing. It's still not perfect, but it's much better."

This week, Alexis, along with Darshil Patel and Maas Lalani, two 18-year-old college freshers, launched a browser extension that hides the number of retweets, likes, and followers on all tweets in users' feeds. Eliminating the native retweet button would certainly deter people from quote tweeting. According to The Atlantic, "it could just send everyone back to the dark ages of the manual retweet when users physically copy-pasted text from another tweet with the letters "RT" plastered in front.
But killing native retweets is certainly a step in the right direction."

For complete coverage of this news, head over to The Telegraph.

Social media platforms, Twitter and Gab.com, accused of facilitating recent domestic terrorism in the U.S.
Twitter prepares for mid-term US elections, with stronger rules and enforcement approach to fight against fake accounts and other malpractices
Twitter on the GDPR radar for refusing to provide a user his data due to 'disproportionate effort' involved