
Tech News

Reinforcement learning model optimizes brain cancer treatment, reduces dosing cycles and improves patient quality of life

Melisha Dsouza
13 Aug 2018
6 min read
Researchers at MIT have come up with an intriguing approach to combat glioblastoma, a malignant tumor of the brain and spinal cord, using machine learning techniques. By reducing the toxic chemotherapy and radiotherapy involved in treating this cancer, the researchers aim to improve patients' quality of life while also reducing the side effects those treatments cause. While the prognosis for adults is no more than 5 years, medical professionals try to shrink the tumor by administering drug doses in safe amounts. However, the pharmaceuticals are so strong that patients end up suffering from their side effects. Enter machine learning and artificial intelligence to save the day. While it's no hidden truth that machine learning is being incorporated into healthcare on a huge scale, the MIT researchers have taken this to the next level.

Using Reinforcement Learning as the Big Idea to train the model

Media Lab researcher Gregory Yauney will be presenting a paper next week at the 2018 Machine Learning for Healthcare conference at Stanford University. The paper details how the MIT Media Lab researchers have come up with a model that could make dosing cycles less toxic but still effective. Incorporating a "self-learning" machine learning technique, the model studies the treatment regimens used at present and iteratively adjusts the doses. In the end, it finds an optimal treatment plan suited to the patient, and this has proven to reduce tumor sizes to a degree almost identical to that of the original medical regimens. The model simulated trials of 50 patients and designed treatments that either reduced dosages to twice a year or skipped them altogether. This was done keeping in mind that the model has to shrink the size of the tumor while ensuring that reduced dosages do not lead to harmful side effects.

The model is designed to use reinforcement learning (RL), in which artificially intelligent "agents" take "actions" in an unpredictable, complex environment to reach a desired outcome. The model's agent works through traditionally administered regimens, which use a combination of the drugs temozolomide (TMZ) and procarbazine, lomustine, and vincristine (PVC), administered to patients over weeks or months. These regimens are based on protocols that have been used clinically for decades and draw on both animal testing and various clinical trials and scenarios. Oncologists then use the protocols to decide how many doses to administer based on a patient's weight.

As the model explores a regimen, it decides on one of two actions: initiate a dose, or withhold it. If it does administer a dose, it has to decide whether the patient needs the entire dose or only a portion of it. After a decision is taken, the model checks with another clinical model to see if the tumor's size has changed or if it's still the same. If the tumor's size has reduced, the model receives a reward; otherwise it is penalized. Rewards and penalties are essentially positive and negative numbers, say +1 or −1. The researchers also had to ensure that the model does not simply hand out the maximum number of doses to reduce the mean diameter of the tumor. Therefore, the model is programmed in such a way that whenever it chooses to administer all full doses, it gets penalized. The model is thus pushed toward fewer, smaller doses.
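To make the reward shaping concrete, here is a minimal, hypothetical sketch of the idea described above. The function name, action types and penalty weight are our own illustrative assumptions, not part of the MIT model itself.

```typescript
// Hypothetical sketch of the dosing reward described above: +1/-1 for the tumor
// outcome, minus a toxicity penalty that grows with the size of the dose.
type DoseAction = "withhold" | "partial" | "full";

function doseReward(tumorShrank: boolean, action: DoseAction, toxicityWeight = 0.5): number {
  const outcome = tumorShrank ? 1 : -1; // reward if the simulated tumor shrank, penalty otherwise
  const toxicity =
    action === "full" ? toxicityWeight :
    action === "partial" ? toxicityWeight / 2 :
    0; // withholding a dose carries no toxicity penalty
  return outcome - toxicity; // balance efficacy against toxicity
}

// Shrinking the tumor with a partial dose scores higher than doing so with a full dose.
console.log(doseReward(true, "partial")); // 0.75
console.log(doseReward(true, "full"));    // 0.5
```

An agent trained against a clinical simulator with a reward of roughly this shape is nudged toward fewer, smaller doses whenever they achieve comparable tumor reduction.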
Pratik Shah, a principal investigator at the Media Lab who supervised this research, stresses that, compared to traditional RL models that work toward a single outcome, such as winning a game, and take any and all actions that maximize that outcome, the model implemented by the MIT researchers is an "unorthodox RL model that weighs potential negative consequences of actions (doses) against an outcome (tumor reduction)". The model is wired to find a dose that does not necessarily maximize tumor reduction, but instead strikes a balance between maximum tumor reduction and low toxicity for the patient.

The training and testing methodology used

The model was trained on 50 simulated patients, randomly selected from a large database of glioblastoma patients who had previously undergone traditional treatments. The model conducted about 20,000 trial-and-error test runs for every patient. Once training was complete, the model had learned the parameters for optimal regimens. It was then tested on 50 new simulated patients and used the learned parameters to formulate new regimens based on various constraints that the researchers provided. The model's treatment regimens were compared to the results of a conventional regimen using both TMZ and PVC, and the outcomes were practically identical to the results obtained when the human counterparts administered treatments. The model was also able to treat each patient individually, as well as in a single cohort, and achieved similar results (medical data for each patient was available to the researchers). In short, the model has helped generate precision medicine-based treatments by conducting one-person trials using unorthodox machine learning architectures.

Nicholas J. Schork, a professor and director of human biology at the J. Craig Venter Institute and an expert in clinical trial design, explains: "Humans don't have the in-depth perception that a machine looking at tons of data has, so the human process is slow, tedious, and inexact." He further adds: "Here, you're just letting a computer look for patterns in the data, which would take forever for a human to sift through, and use those patterns to find optimal doses."

To sum it all up, machine learning is again proving to be an essential asset in the medical field, helping both researchers and patients view medical treatments from a whole new perspective. If you would like to know more about the progress made so far, head over to MIT News.

23andMe shares 5mn client genetic data with GSK for drug target discovery
Machine learning for genomics is bridging the gap between research and clinical trials
6 use cases of Machine Learning in Healthcare

It is supposedly possible to increase reproducibility from 54% to 90% in Debian Buster!

Melisha Dsouza
06 Mar 2019
2 min read
Yesterday, Holger Levsen, a member of the team maintaining reproducible.debian.net, started a discussion on reproducible builds, stating that "Debian Buster will only be 54% reproducible (while we could be at >90%)".

He started off by stating that tests indicate that 26476 (92.8%) of Debian Buster's 28523 source packages can be built reproducibly in buster/amd64. Those 28523 source packages build 57448 binary packages. Next, looking at the binary packages Debian actually distributes, he says that Vagrant came up with the idea of checking buildinfo.debian.net for .deb files for which two or more .buildinfo files exist. Turning this into a Jenkins job, he checked this for all 57448 binary packages in amd64/buster/main (including downloading all those .deb files from ftp.d.o) and obtained the following results:

reproducible packages in buster/amd64: 30885 (53.76%)
unreproducible packages in buster/amd64: 26543 (46.20%)
reproducible binNMUs in buster/amd64: 0 (0%)
unreproducible binNMUs in buster/amd64: 7423 (12.92%)

He suggests that binNMUs are unreproducible by design, and his proposed fix is that binNMUs "should be replaced by easy 'no-change-except-debian/changelog' uploads". This would mean a 12% increase in reproducibility on top of the current 54%. Next, he also discovered that 6804 source packages need a rebuild, because they were built with an old dpkg (from before December 2016) that did not produce .buildinfo files; 6804 of 28523 accounts for 23.9%. Summing everything up, 54% + 12% + 24% equals roughly 90% reproducibility.

Refer to the entire discussion thread for more details on this news.
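As a quick sanity check, the figures quoted above can be recomputed directly. This is a rough estimate that mixes the binary-package and source-package bases, exactly as the original post does.

```typescript
// Back-of-the-envelope check of the percentages quoted in the thread.
const binaryPackages = 57448;  // binary packages built from buster's source packages
const sourcePackages = 28523;  // source packages in buster

const alreadyReproducible = 30885 / binaryPackages; // ~53.8% of binaries reproducible today
const binNMUs = 7423 / binaryPackages;              // ~12.9% unreproducible binNMUs
const oldDpkgRebuilds = 6804 / sourcePackages;      // ~23.9% built with an old dpkg (no .buildinfo)

// 53.8% + 12.9% + 23.9% ≈ 90.5%, which is the ">90%" figure Levsen argues is reachable.
console.log(((alreadyReproducible + binNMUs + oldDpkgRebuilds) * 100).toFixed(1));
```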
Google Project Zero discovers a cache invalidation bug in Linux memory management, Ubuntu and Debian remain vulnerable
User discovers bug in debian stable kernel upgrade; armmp package affected
Debian 9.7 released with fix for RCE flaw

Five developer-centric sessions at IoT World 2018

Savia Lobo
22 May 2018
6 min read
The Internet of Things has shown remarkable improvement over the years. Basic IoT embedded devices with sensors have now advanced to a level where AI can be deployed on IoT devices to make them smarter. The IoT World 2018 conference was held from May 14th to 17th at the Santa Clara Convention Center, CA, USA. A special developer-centric conference designed specifically for technologists was also part of the larger event. The agenda for the developers' conference was to bring together the technical leaders who have contributed their innovations to the IoT market and tech enthusiasts who look forward to developing their careers in this domain. The conference also included sessions such as SAP learning and interesting keynotes on the intelligent Internet of Things. Here are five sessions that caught our eye at the developer conference at IoT World 2018.

How to develop Embedded Systems using modern software practices, Kimberly Clavin

Kimberly Clavin highlighted that major challenges in developing autonomous vehicles include system integration and the validation techniques used to ensure quality within the code. There are a plethora of companies that have software at their core and use modern software practices such as Test Driven Development (TDD) and Continuous Integration (CI) for successful development. However, the same tactics cannot be directly applied in an embedded environment. Kimberly presented ways to adapt these modern software practices for use in the development of embedded systems, which can help developers create systems that are fast, scalable, and cheaper. The highlights of this session include:

Learning to test drive an embedded component.
Understanding how to mock out/simulate an unavailable component.
Applying Test Driven Development (TDD), Continuous Integration (CI) and mocking to achieve a scalable software process on an embedded project.

How to use Machine Learning to Drive Intelligence at the Edge, Dave Shuman and Vito De Gaetano

Edge IoT is gaining a lot of traction of late. One way to make the edge intelligent is by building ML models in the cloud and pushing the learning and the models out to the edge. This presentation by Dave Shuman and Vito De Gaetano showed how organizations can push intelligence to the edge via an end-to-end open source architecture for IoT, based on Eclipse Kura and Eclipse Kapua. Eclipse Kura is an open source stack for gateways and the edge, whereas Eclipse Kapua is an open source IoT cloud platform. The architecture can enable:

Securely connecting and managing millions of distributed IoT devices and gateways
Machine learning and analytics capabilities with intelligence and analytics at the edge
A centralized data management and analytics platform with the ability to build or refine machine learning models and push these out to the edge
Application development, deployment and integration services

The presentation also showcased an Industry 4.0 demo, which highlighted how to ingest, process and analyze data coming from factory floors, i.e. from the equipment, and how to enable machine learning on the edge using this data.
How to build Complex Things in a simplified manner, Ming Zhang

Ming Zhang put forth a simple question: "Why is making hardware so hard?" Some reasons could be:

The total time and cost to launch a differentiated product is prohibitively high because of expensive and iterative design, manufacturing and testing.
System form factors aren't flexible; connected things require richer features and/or smaller sizes.
There's unnecessary complexity in the manufacturing and component supply chain.

Designing hardware is a time-consuming process, cumbersome and not as much fun for designers as software development. Ming Zhang showcased a solution, the zGlue ZiPlet Store, a platform on which users can build complex things with ease. The zGlue Integrated Platform (ZiP) simplifies the process of designing and manufacturing devices for IoT systems and provides seamless integration of both hardware and software on a modular platform.

Building IoT Cloud Applications at Scale with Microservices, Dave Chen

This presentation by Dave Chen covered how connectivity, big data, and analytics are transforming several types of business. A major challenge in the IIoT sector is the accumulation of humongous amounts of data, generated by machinery and industrial equipment such as wind turbines, sensors, and so on. Valuable information has to be extracted from this data securely, efficiently and quickly. The presentation focused on how one can leverage microservice design principles and other open source platforms to build an effective IoT device management solution in a microservice-oriented architecture. By doing this, managing a large population of IoT devices securely becomes easy and scalable.

Design Patterns/Architecture Maps for IoT

Design patterns are the building blocks of architecture and enable developers and architects to reuse solutions to common problems. The presentation showcased how common design patterns for connected things, common use cases and infrastructure can accelerate the development of connected devices.

Extending Security to Low Complexity IoT Endpoint Devices, Francois Le

At present, there are millions of low-compute, low-power IoT sensors and devices deployed, and they are predicted to multiply to billions within a decade. However, these devices do not have any kind of security even though they hold crucial, real-time information. These low-complexity devices have very limited onboard processing power, less memory and battery capacity, and are typically very low cost. They cannot work like IoT edge devices, which can easily handle validation and encryption techniques and have enough processing power to handle the multiple message exchanges used for authentication. The presentation argued that a new security scheme needs to be designed from the ground up. It must take up less space on the processor and have a low impact on battery life and cost. The solution should be:

IoT platform agnostic and easy to implement by IoT vendors
Easily operated over any wireless technology (e.g., Zigbee, BLE, LoRa, etc.)
Transparent to the existing network implementation
Automated and scalable for very high volumes
Able to evolve with new security and encryption techniques as they are released
Able to last a long time in the field with no need to update the edge devices with security patches

Apart from these, many other presentations were showcased at IoT World 2018 for developers.
Some of them include Minimize Cybersecurity Risks in the Development of IoT Solutions and Internet of Things (IoT) Edge Analytics: Do's and Don'ts. Read more about the keynotes presented at IoT World 2018 on the conference's official website.

Cognitive IoT: How Artificial Intelligence is remoulding Industrial and Consumer IoT
How IoT is going to change tech teams
AWS Greengrass brings machine learning to the edge

Facebook, Twitter take down hundreds of fake accounts with ties to Russia and Iran, suspected of trying to influence the US midterm elections

Melisha Dsouza
24 Aug 2018
4 min read
"Authenticity matters and people need to be able to trust the connections they make on Facebook." -Mark Zuckerberg After Facebook announced last month that it had identified suspicious accounts that were engaged in "coordinated inauthentic behavior," it successfully took down 652 fake accounts and pages that published political content. Facebook had then declined to specify which country or countries may have been leading the campaign, but officials said the campaign was consistent with previous Russian attacks. These pages were suspected to have been intended to influence the US midterm elections set to take place in November this year. The campaigns were first discovered by FireEye, a cybersecurity firm that worked with Facebook on investigating the fake pages and accounts. Earlier this week, Facebook confirmed in a blog post that these campaigns had links to Russia and Iran. The existence of the fake accounts was first reported by The New York Times. Taking down Inauthentic Behaviour The conspiracy started unravelling in July,  when FireEye tipped Facebook off to the existence of a network of pages known as “Liberty Front Press”. The network included 70 accounts, three Facebook groups, and 76 Instagram accounts, which had 155,000 Facebook followers and 48,000 Instagram followers. The network had undisclosed links to Iranian state media, Facebook said, and spent more than $6,000 between 2015 and today. The network also hosted three events. On investigating those pages, it was found that they linked them back to Iranian state media using website registration information and internet protocol addresses. Pages created in 2013, posted political content that was focused on the Middle East, Latin America, Britain and the United States. Other fake pages also had a far more international spread than the earlier batches uncovered. They carried a number of pro-Iranian themes. The aim of the pages also included promoting Palestinians. Some included anti-Trump language and were tied to relations between the United States and Iran, including references to the Iranian nuclear weapons deal. Newer accounts, created in 2016 targeted cybersecurity by spreading malware and stealing passwords. The accounts that originated in Russia focused on activity in Ukraine and Syria. They did not appear to target the United States. But the aim of the latest campaigns can be summed up to be on similar lines as to those of past operations on the social network. Mainly to distribute fake news that might cause confusion among people, as well as to alter people’s thinking to become more biased or pro-government on various issues. Mark Zuckerberg, Facebook’s chief executive, officially made a statement in a conference call late Tuesday saying, “We believe these pages, groups, and accounts were part of two sets of campaigns, One from Iran, with ties to state-owned media. The other came from a set of people the U.S. government and others have linked to Russia.” Closely following suit, Twitter also went ahead and suspended 284 accounts for engaging in coordinated manipulation. Their analysis supports the theory that many of these accounts originated from Iran. Another social media giant, YouTube, deleted a channel called ‘Liberty Front Press’, which was a website linked to some of the fake Iranian accounts on Facebook. This was done because the account violated its community guidelines. 
Facebook has come under heavy scrutiny for how its policies are exploited by third parties for fake news, propaganda, and other malicious activity, especially after the debacle of the coordinated election interference by Russia's IRA before, during, and after the 2016 US election. The criticism has only intensified as the US heads toward the midterms. Facebook has been making an effort to prepare its products and moderation strategy for any manipulation. Now Facebook has taken a step further and is working with researchers to study social media-based election interference. The social media giant hopes to understand how this interference functions and to find ways to stop it. Read The New York Times report for further analysis of this evolving situation.

Facebook and NYU are working together to make MRI scans 10x faster
Four 2018 Facebook patents to battle fake news and improve news feed
Facebook is investigating data analytics firm Crimson Hexagon over misuse of data

Facebook Watch is now available worldwide, challenging video streaming rivals YouTube, Twitch, and more

Bhagyashree R
30 Aug 2018
3 min read
Yesterday, Facebook made its video streaming service, Facebook Watch, globally available. It was first launched in August 2017 for a limited group of people in the US. Facebook Watch's content is produced by its partners, who earn 55% of advertising revenue while Facebook keeps 45%.

How is Facebook Watch different from streaming rivals like YouTube and Twitch?

Facebook believes that Watch is unique compared to rivals such as YouTube, Amazon's Twitch, and Netflix because of how it helps viewers interact with each other. Fidji Simo, Facebook's vice-president of video, told the BBC: "It is built on the notion that watching video doesn't have to be a passive experience. You can have a two-way conversation about the content with friends, other fans or even the creators themselves." Facebook Watch comes with a feature called Watch Party that lets users coordinate to watch a show together. Creators can boost engagement with the Interactivity Platform, which allows them to run polls, challenges, and quizzes.

How will it support its content creators?

Facebook has laid out a plan to support its publishers and content creators in two main areas: ad breaks to generate revenue from their videos, and Creator Studio to understand how their content is performing.

Ad breaks eligibility criteria and availability

Ad breaks are launched across four markets and are only available to pages that publish videos in certain languages and countries right now. More countries and languages will be supported by the end of the year and in 2019.

Eligibility:
Videos should be at least 3 minutes long
Videos must have generated more than 30,000 one-minute views in total over the past two months
Pages should have at least 10,000 Facebook followers
Pages must meet Facebook's Monetisation Eligibility Standards
Pages should be located in a country where ad breaks are available

Availability: Currently, ad breaks are supported in the US, UK, Ireland, New Zealand and Australia. Over the next few months, availability will expand to more countries and languages.

Manage your video content with Creator Studio

Creator Studio provides creators a central place for Pages to manage their entire content library and business. You can do the following:

Manage content and interactions: look through insights, manage interactions across all owned Pages, and respond to Facebook messages or comments on Facebook and Instagram.
Streamline video publishing: compose, schedule, and publish content across owned Pages, including bulk uploads.
Access ad breaks: review monetisation insights and view payments.

Along with this, you can access Rights Manager, use the sound collection, and take advantage of new features and monetisation opportunities you may be eligible for. To know more about the recent updates and your eligibility for Facebook Watch, check out the official announcement.

A new conservative employee group within Facebook to protest Facebook's "intolerant" liberal policies
Facebook bans another quiz app and suspends 400 more due to concerns of data misuse
Facebook is reportedly rating users on how trustworthy they are at flagging fake news

Walmart to deploy thousands of robots in its 5000 stores across US

Fatema Patrawala
12 Apr 2019
4 min read
Walmart, the world's largest retailer, is following the latest tech trend and going all in on robots. It plans to deploy thousands of robots for lower-level jobs in 5,000 of its 11,348 stores in the US. In a statement released on its blog on Tuesday, the retail giant said that it was unleashing a number of technological innovations, including autonomous floor cleaners, shelf-scanners, conveyor belts, and "pickup towers", on stores across the United States.

Elizabeth Walker from Walmart Corporate Affairs says, "Every hero needs a sidekick, and some of the best have been automated. Smart assistants have huge potential to make busy stores run more smoothly, so Walmart has been pioneering new technologies to minimize the time an associate spends on the more mundane and repetitive tasks like cleaning floors or checking inventory on a shelf. This gives associates more of an opportunity to do what they're uniquely qualified for: serve customers face-to-face on the sales floor."

Walmart further announced that it would be adding 1,500 new floor cleaners, 300 more shelf-scanners, 1,200 conveyor belts, and 900 new pickup towers. The technology has been tested in dozens of markets and hundreds of stores to prove the effectiveness of the robots. The idea of replacing people with machines for certain job roles will also reduce costs for Walmart: if you are not hiring people, they can't quit, demand a living wage, or take sick days off, resulting in better margins and efficiencies.

According to Walmart CEO Doug McMillon, "Automating certain tasks gives associates more time to do work they find fulfilling and to interact with customers." Continuing this logic, the retailer points to robots as a source of greater efficiency, increased sales and reduced employee turnover.

"Our associates immediately understood the opportunity for the new technology to free them up from focusing on tasks that are repeatable, predictable and manual," John Crecelius, senior vice president of central operations for Walmart US, said in an interview with BBC Insider. "It allows them time to focus more on selling merchandise and serving customers, which they tell us have always been the most exciting parts of working in retail."

With the war for talent raging on in the world of retail and demands for minimum wage hikes a frequent occurrence, Walmart's expanding robot army is a signal that the company is committed to keeping labor costs down. Does that mean cutting jobs or employee restructuring? Walmart has not specified how many jobs it will cut as a result of this move. But when automation takes place at the largest retailer in the US, significant job losses can be expected.

https://twitter.com/NoelSharkey/status/1116241378600730626

Early last year, Bloomberg reported that Walmart was removing around 3,500 store co-managers, a salaried role that acts as a lieutenant underneath each store manager. The US in particular has an inordinately high proportion of employees performing routine functions that could easily be automated, so retail automation is bound to hit them the hardest. With costs on the rise, and Amazon a constant looming threat that has already resulted in the closing of thousands of mom-and-pop stores across the US, it was inevitable that Walmart would turn to automation as a way to stay competitive in the market.
As the largest retail employer in the US transitions to an automated retailing model, it will leave a good proportion of the 704,000-strong US retail workforce either unemployed, underemployed or unready to transition into other jobs. How much Walmart assists its redundant workforce in transitioning to another livelihood will be a litmus test for its widely held image of a caring employer, in contrast to Amazon's ruthless image.

How Rolls Royce is applying AI and robotics for smart engine maintenance
AI powered Robotics: Autonomous machines in the making
Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics
Google Chrome will soon support LazyLoad, a solution to lazily load below-the-fold images and iframes

Bhagyashree R
09 Apr 2019
2 min read
Google Chrome will soon support something called LazyLoad, a feature that allows browsers to delay the loading of out-of-view images and iframes until the user scrolls near them, shared Scott Little, a Chromium developer, yesterday.

Why is LazyLoad being introduced?

Very often, web pages have images and other embedded content like ads placed below the fold, and users don't always end up scrolling all the way down. LazyLoad takes advantage of this behavior to load the important content much faster, reducing network data and memory usage. LazyLoad waits to load images and iframes that are out of view until the user scrolls near them. It is up to the browser to decide exactly how "near", but it should typically start loading the out-of-view content some distance before it comes into view.

Currently, there are a few JavaScript libraries that can be used for lazily loading images or other kinds of content. But natively supporting such a feature in the browser itself will make it easier for websites to take advantage of lazy loading. Additionally, with this feature browsers will be able to automatically find and load content that is suitable for lazy loading.

The LazyLoad solution will be supported on all platforms. Web pages just need to use loading="lazy" on their img and iframe elements. For Android Chrome users who have Data Saver turned on, elements with loading="auto" or with the attribute unset will also be lazily loaded if Chrome finds them to be good candidates based on heuristics. If you set loading="eager" on an image or iframe element, it will not be lazily loaded. To read more in detail about LazyLoad, check out its GitHub repository.
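As a rough illustration of how a page opts in, here is a minimal sketch; the image path is hypothetical, and the feature-detection check is a common pattern rather than part of the Chrome announcement.

```typescript
// Opt an out-of-view image into native lazy loading via the loading attribute
// described above ("lazy", "eager", or "auto"). The image path is hypothetical.
const img = document.createElement("img");
img.src = "/assets/below-the-fold-photo.jpg";
img.setAttribute("loading", "lazy"); // defer the fetch until the user scrolls near the image
document.body.appendChild(img);

// Feature detection: browsers without native support simply ignore the attribute,
// so a JavaScript fallback (e.g. one built on IntersectionObserver) can be used instead.
const supportsNativeLazyLoad = "loading" in HTMLImageElement.prototype;
console.log(`native lazy loading supported: ${supportsNativeLazyLoad}`);
```

The same attribute applies to iframe elements, and setting loading="eager" opts an element out of lazy loading entirely.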
Google's Cloud Healthcare API is now available in beta
Ian Goodfellow quits Google and joins Apple as a director of machine learning
Google dissolves its Advanced Technology External Advisory Council in a week after repeat criticism on selection of members

LedgerConnect: A blockchain app store by IBM, CLS, Barclays, Citi and 7 other banks is being trialled

Prasad Ramesh
07 Aug 2018
3 min read
Blockchain is an open, decentralized database. It is the underlying technology for the popular cryptocurrency Bitcoin. Now, banks and other financial institutions want to apply it to financial transactions. The recently launched IBM Blockchain platform, LedgerConnect, is aimed at the financial industry and banking sectors.

What is LedgerConnect?

IBM, with CLS, a foreign exchange financial group, launched LedgerConnect a week ago. It is a proof-of-concept blockchain platform designed for companies that provide financial services. Its aim is to apply blockchain technology to a number of areas that are currently slow and challenging, such as tracking paperwork trails, know your customer (KYC) processes, collateral management, and trades. So far, nine financial companies, including banks like Barclays and Citi, are involved in the trials. With everything being online, delays will likely be reduced by a large margin. According to an IBM report: "With IBM Blockchain, banks can create secure, low-cost and high volume cross-border payments without sacrificing margins."

Why is it important?

The IBM website states that 91% of banks are investing in blockchain solutions by 2018, and 66% of institutions expect to be in production and running at scale with blockchain. All transactions being on the same network makes everything much easier, and blockchain is known to be tamper-proof, so overall security and speed should increase significantly. While the blockchain used in Bitcoin is public, the one used by large companies would be private, which makes sense, as you wouldn't want everyone's banking and transaction information publicly available. To further understand why blockchains are great for banking applications, read our article, 15 ways to make Blockchains scalable, secure and safe!

Impact of blockchain in banking and finance

Today, banks work with manual transactions and a considerable amount of paperwork, so transactions like trades and loans take a long time, leading to delays. Cross-border payments are costly and time-consuming. Moreover, making customers provide identification repeatedly is tedious and decreases customer satisfaction. IBM Blockchain aims to address all these pain points. As the platform is online, blockchain-based smart contracts automatically store information about transactions in real time, so there won't be discrepancies in the data shown in different locations. Transactions on the blockchain are identified uniquely with a hash. KYC information will also be more secure, since it will be stored on the blockchain, which is known for being tamper-proof.

Source: IBM infographic

LedgerConnect aims to create a network of multiple providers that can offer their applications for deployment on the platform. Large banks and organizations can then use those applications, creating an app store-like ecosystem. The applications in the LedgerConnect "store" are built on Hyperledger Fabric. The project is expected to launch in early 2019 but needs approval from central banks. To know more about Hyperledger, take a look at our article Hyperledger: The Enterprise-ready Blockchain. Implementing blockchain in finance and banking will enable simplicity and operational efficiency while also enhancing the end customer experience. Many of the current challenges can be solved, and the time involved can be brought down from hours to minutes. To know more, visit the IBM website.
Read next:
Google Cloud Launches Blockchain Toolkit to help developers build apps easily
Oracle makes its Blockchain cloud service generally available
Blockchain can solve tech's trust issues – Imran Bashir

Cryptojacking is a growing cybersecurity threat, report warns

Richard Gall
11 Apr 2018
2 min read
Cryptojacking is a growing threat to users, a UK cyber security agency warns. In its Cyber Threat to UK Business report, the UK's National Cyber Security Centre (NCSC) outlines the growing use of cryptojacking as a method of mining cryptocurrency by stealth. The report quotes an earlier study by Check Point, done at the end of 2017, indicating that 55% of businesses globally had been impacted by the technique.

One of the most interesting aspects of cryptojacking is how it blurs the lines of cybercriminality. Although the NCSC 'assumes' that it is ultimately a new technique being used by experienced cyber criminals, the report also notes that websites, without necessarily having any record of cybercrime, are using it as a way of mining cryptocurrencies without users' knowledge. It's worth noting that back in February, Salon gave users the option to suppress ads in return for using their computing power. This was essentially a legitimate and transparent form of cryptocurrency mining.

What is cryptojacking?

Cryptojacking is a method whereby a website visitor's CPU is 'hijacked' by a piece of JavaScript code that runs when the user accesses a specific webpage. This code then allows cybercriminals to 'mine' cryptocurrencies (at present, Monero) without users' knowledge. The NCSC report gives an example of this in action: according to the report, more than 4,000 websites "mined cryptocurrency through a compromised screen-reading plugin for blind and partially sighted people."

Cryptojacking looks set, then, to become a larger problem within the cybersecurity world. Because it's so hard for users to identify that they are being exploited, it's likely that this will be difficult to tackle. However, technology-savvy users are already creating solutions to protect themselves from cryptojacking; these will effectively become the next wave of ad blockers. It will be interesting to see whether this does, in fact, become a model that the media industry takes on to tackle struggling revenues. Could Salon's trial lead to the increased adoption of legitimate cryptojacking as a revenue stream? Whatever happens, user consent is going to remain an issue.

Source: Coindesk

Vevo's YouTube account Hacked: Popular videos deleted
Top 5 cloud security threats to look out for in 2018

Google shares initiatives towards enforcing its AI principles; employs a formal review structure for new projects

Bhagyashree R
20 Dec 2018
3 min read
Earlier this year, Sundar Pichai shared seven AI principles that Google aims to follow in its work. Google also shared some best practices for building responsible AI. Yesterday, it shared the additional initiatives and processes it has introduced to live up to those AI principles. These include educating people about ethics in technology and introducing a formal review structure for new projects, products, and deals.

Educating Googlers on ethical AI

Making Googlers aware of ethical issues: Additional learning material has been added to the Ethics in Technology Practice course, which teaches technical and non-technical Googlers how to address the ethical issues that arise at work. In the future, Google is planning to make this course accessible to everyone across the company.

Introducing an AI Ethics Speaker Series: This series features external experts across different countries, regions, and professional disciplines. So far, eight sessions have been conducted with 11 speakers, covering topics from bias in natural language processing (NLP) to the use of AI in criminal justice.

AI fairness: A new module on fairness has been added to Google's free Machine Learning Crash Course. The course is available in 11 languages and is being used by more than 21,000 Google employees. The fairness module explores the different types of human bias that can creep into training data and provides strategies to identify and evaluate their effects.

Review structure for new projects, products, and deals

Google has employed a formal review structure to check the scale, severity, and likelihood of best- and worst-case scenarios for new projects, products, and deals. This review structure consists of three core groups:

Innovation team: user researchers, social scientists, ethicists, human rights specialists, policy and privacy advisors, and legal experts, responsible for day-to-day operations and initial assessments.
Senior experts: senior experts from a range of disciplines across Alphabet Inc. who provide technological, functional, and application expertise.
Council of senior executives: handles decisions that affect multiple products and technologies.

More than 100 reviews have been completed under this formal review structure so far. In the future, Google plans to create an external advisory group comprising experts from a variety of disciplines. To read more about Google's initiatives towards ethical AI, check out the official announcement.

Google won't sell its facial recognition technology until questions around tech and policy are sorted
Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha) to effectively train and deploy TensorFlow machine learning models on GCP
Google kills another product: Fusion tables
GitHub updates developers and policymakers on EU copyright Directive at Brussels

Savia Lobo
25 Oct 2018
2 min read
On Tuesday, the 16th of October, GitHub hosted Open Source and Copyright: from Industry 4.0 to SMEs in Brussels. Organized in partnership with OpenForum Europe and Red Hat, the event was designed to raise awareness of the EU Copyright Directive among developers and policymakers. GitHub has made its position on the controversial legislation clear, saying that while "current copyright laws are outdated in many respects and need modernization, we are concerned that some aspects of the EU's proposed copyright reform package would inadvertently affect software."

The event included further discussion on topics such as:

Policy: For GitHub, Abby Vollmer shared how developers have been especially effective in getting policymakers to respond to problems with the copyright proposal, and asked them to continue reaching out to policymakers about a technical fix to protect open source.
Developers: Evis Barbullushi from Red Hat explained why open source is so fundamental to software and critical to the EU, using examples of what open source powers every day. He also highlighted the world-class and commercially mainstream nature of open source.
SMEs: Sebastiano Toffaletti (from the European Digital SME Alliance) described concerns about the copyright proposal from the perspective of SMEs, including how efforts to regulate large platforms can end up harming SMEs even if they're not the target.
Research and academia: Roberto Di Cosmo (Software Heritage) wrapped up the talks by noting that he "should not be here, because, in a world in which software was better understood and valued, policymakers would never introduce a proposal that inadvertently puts software at great risk", and he motivated developers to fix this underlying problem.

In its previous EU copyright proposal update, GitHub explained that the EU Council, Parliament, and Commission were ready to begin final-stage negotiations on the copyright proposal. These three institutions are now working on the exceptions to copyright for text and data mining (Article 3), among other technical elements of the proposal. Article 13 would likely drive many platforms to use upload filters on user-generated content, and since Article 2 defines which services are in the scope of Article 13, Articles 2 and 13 will be discussed together. This means developers can still contact policymakers with thoughts on what outcomes are best for software development.

The LLVM project is ditching SVN for GitHub. The migration to GitHub has begun.
GitHub Business Cloud is now FedRAMP authorized
What we learnt from the GitHub Octoverse 2018 Report

MariaDB announces the release of MariaDB Enterprise Server 10.4

Amrata Joshi
12 Jun 2019
4 min read
Yesterday, the team at MariaDB announced the release of MariaDB Enterprise Server 10.4, code-named "restful nights". It is a hardened and secured server, distinct from MariaDB's Community Server. This release is focused on solving enterprise customer needs, offering greater reliability, stability and long-term support in production environments. MariaDB Enterprise Server 10.4 and its backported versions will be available to customers by the end of the month as part of the MariaDB Platform subscription.

https://twitter.com/mariadb/status/1138737719553798144

The official blog post reads, "For the past couple of years, we have been collaborating very closely with some of our large enterprise customers. From that collaboration, it has become clear that their needs differ vastly from that of the average community user. Not only do they have different requirements on quality and robustness, they also have different requirements for features to support production environments. That's why we decided to invest heavily into creating a MariaDB Enterprise Server, to address the needs of our customers with mission critical production workloads."

MariaDB Enterprise Server 10.4 comes with added functionality for enterprises running MariaDB at scale in production environments. It involves new levels of testing and ships in a secure-by-default configuration. It also includes the same features as MariaDB Server 10.4, including bitemporal tables, an expanded set of instant schema changes and a number of improvements to authentication and authorization (e.g., password expiration and automatic/manual account locking).

Max Mether, VP of Server Product Management, MariaDB Corporation, wrote in an email to us, "The new version of MariaDB Server is a hardened database that transforms open source into enterprise open source." He further added, "We worked closely with our customers to add the features and quality they need to run in the most demanding production environments out-of-the-box. With MariaDB Enterprise Server, we're focused on top-notch quality, comprehensive security, fast bug fixes and features that let our customers run at internet-scale performance without downtime."

James Curtis, Senior Analyst, Data Platforms and Analytics, 451 Research, said, "MariaDB has maintained a solid place in the database landscape during the past few years." He added, "The company is taking steps to build on this foundation and expand its market presence with the introduction of MariaDB Enterprise Server, an open source, enterprise-grade offering targeted at enterprise clients anxious to stand up production-grade MariaDB environments."

Reliability and stability

MariaDB Enterprise Server 10.4 offers the reliability and stability required for production environments. Bugs are fixed in this server to help maintain that reliability, key enterprise features are backported for those running earlier versions of MariaDB Server, and long-term support is provided.

Security

Unsecured databases are often the cause of data breaches. MariaDB Enterprise Server 10.4 is configured with security settings to support enterprise applications: all non-GA plugins are disabled by default to reduce the risks of using unsupported features, and the default configuration has been changed to enforce strong security, durability and consistency.
Enterprise backup

MariaDB Enterprise Server 10.4 offers enterprise backup, which brings operational efficiency to customers with large databases by breaking backups into non-blocking stages. This way, writes and schema changes can occur during backups rather than waiting for the backup to complete.

Auditing capabilities

The server adds secure, stronger and easier auditing capabilities by logging all changes to the audit configuration. It also logs detailed connection information, giving customers a comprehensive view of changes made to the database.

End-to-end encryption

It also offers end-to-end encryption for multi-master clusters, where the transaction buffers are encrypted to ensure that the data is secure.

https://twitter.com/holgermu/status/1138511727610478594

Learn more about this news on the official web page.

MariaDB CEO says big proprietary cloud vendors "strip-mining open-source technologies and companies"
MariaDB announces MariaDB Enterprise Server and welcomes Amazon's Mark Porter as an advisor to the board of directors
TiDB open sources its MySQL/MariaDB compatible data migration (DM) tool

SAP Cloud Platform is now generally available on Microsoft Azure

Savia Lobo
11 Jun 2018
3 min read
Microsoft has stated that its recent addition, SAP Cloud Platform, is now generally available on its Azure platform. SAP Cloud Platform enables developers to build SAP applications and extensions using a PaaS development platform with integrated services. With the platform generally available, developers can now deploy Cloud Foundry-based SAP Cloud Platform on Azure. This is currently available in the West Europe region, and Microsoft is working with SAP to enable more regions in the months to come. With SAP HANA's availability on Microsoft Azure, one can expect:

Largest SAP HANA optimized VM size in the cloud

Microsoft will soon launch the Azure M-series, which will support large-memory virtual machines with sizes up to 12 TB, based on Intel Xeon Scalable (Skylake) processors and offering the most memory available of any VM in the public cloud. The M-series will help customers push the limits of virtualization in the cloud for SAP HANA.

Availability of a range of SAP HANA certified VMs

For customers who wish to use small instances, Microsoft also offers smaller M-series VM sizes. These range from 192 GB to 4 TB across 10 different VM sizes and extend Azure's SAP HANA certified M-series VMs. These smaller M-series VMs offer on-demand, SAP-certified instances with the flexibility to spin up or scale up in less time, and to spin down to save costs within a pay-as-you-go model available worldwide. Such flexibility and agility is not possible with a private cloud or on-premises SAP HANA deployment.

24 TB bare-metal instance and optimized price per TB

For customers that need a higher-performance dedicated offering for SAP HANA, Microsoft now offers additional SAP HANA TDIv5 options of 6 TB, 12 TB, 18 TB, and 24 TB configurations in addition to the current configurations from 0.7 TB to 20 TB. For customers who require more memory but the same number of cores, these configurations offer a better price per TB deployed.

A lot more options for SAP HANA in the cloud

SAP HANA on Azure has 26 distinct offerings from 192 GB to 24 TB, scale-up certification up to 20 TB and scale-out certification up to 60 TB. It offers global availability in 12 regions with plans to increase to 22 regions in the next 6 months; Azure now offers the most choice for SAP HANA workloads of any public cloud. Microsoft Azure also enables customers to extract insights and analytics from SAP data with services such as the Azure Data Factory SAP HANA connector to automate data pipelines, Azure Data Lake Store for hyper-scale data storage, and Power BI, an industry-leading self-service visualization tool, to create rich dashboards and reports from SAP ERP data.

Read more about SAP Cloud Platform on Azure on the Microsoft Azure blog.

How to perform predictive forecasting in SAP Analytics Cloud
Epicor partners with Microsoft Azure to adopt Cloud ERP
New updates to Microsoft Azure services for SQL Server, MySQL, and PostgreSQL
Fake news is a danger to democracy. These researchers are using deep learning to model fake news to understand its impact on elections.

Amrata Joshi
05 Nov 2018
6 min read
Last month, researchers from the University of Surrey and St Petersburg National Research University of Information Technologies, Mechanics and Optics published a paper titled How to model fake news. The paper states, "Until now there has been no clear understanding of how to define fake news, much less how to model it. This paper addresses both these issues. A definition of fake news is given, and two approaches for the modelling of fake news and its impact in elections and referendums are introduced."

Why all the fuss about fake news

According to the researchers, "fake news is information that is inconsistent with factual reality. It is information that originates from the 'sender' of fake news, is transmitted through a communication channel and is then received, typically, by the general public. Hence any realistic model for fake news has to be built on the successful and well-established framework of communication theory." False stories on the internet have made it difficult for many to distinguish what is true from what is false. Fake news is a threat to the democratic process in the US, the UK, and elsewhere. For instance, the existence of Holocaust denialists illustrates how doubts about a major historical event can gain traction with certain individuals. Fake news has become such a serious concern to society that it can even endanger the democratic process, and the gravity of the issue has become widely acknowledged, especially after the 2016 US presidential election and the 'Brexit' referendum in the UK on membership of the European Union.

How are researchers trying to model fake news?

The researchers present two approaches for the modeling of fake news in elections. The first is based on the idea of a representative voter, useful for obtaining a qualitative understanding of the effects of fake news. The other is based on the idea of an election microstructure, useful for practical implementation in concrete scenarios. The researchers divide voters into two categories: Category I voters, who are unaware of the existence of fake news, and Category II voters, who know that there may be fake news in circulation but do not know how many pieces of fake news have been released, or at what time.

Approach 1: Using a Representative Voter Framework

In the first approach, those who are influenced by fake news are not viewed as irrational; they simply lack the ability to detect and mitigate the changes caused by the fake news. The transition from the behavioral model of an individual to that of the electorate introduces the idea of a 'representative voter', whose perception represents the aggregation of the diverse views held by the public at large. Category I voters are an example of the representative voter, as they cannot detect or mitigate the fake news. The researchers examine the problem of estimating the release times of fake news, which generates a new type of challenge in communication theory. This estimate is required to characterize a voter who is aware of the potential presence of fake news but is not sure which items of information are fake. The researchers illustrate the dynamics of opinion-poll statistics in a referendum in the presence of a single piece of fake news, and consider an application to an election where multiple pieces of fake news are released at random times.
For instance, the model suggested by the researchers can replicate the qualitative behavior of the opinion-poll dynamics during the 2016 US presidential election.

Approach 2: Using an 'election microstructure' model

The researchers further introduce an 'election microstructure' model, in which an information-based scheme is used to describe the dynamical behavior of individual voters and the resulting collective voting behavior of the electorate under the influence of fake news. Category II voters fit this setting, as they know about the fake news but are not aware of all the pieces of news or their precise timing. The modeling framework proposed in the paper uses Wiener's philosophy: the authors apply and extend techniques from filtering theory, a branch of communication theory that aims at filtering out noise in communication channels, in a novel way to generate models well suited to the treatment of fake news. The mathematics of the election microstructure model is the same as that of the representative voter framework; the only difference is that in the election microstructure model the signal in the information process can be transmitted by a sender (e.g., the candidate).

Deep learning can be used to tackle the problem of fake news

According to the researchers, techniques like deep learning can help in the detection and prevention of fake news. However, to address the issues surrounding its impact, it is important to develop a consistent mathematical model that describes the phenomena resulting from the flow of fake news. Such a model should be intuitive and tractable, so that model parameters can be calibrated against real data and predictions can be made, either analytically or numerically. In both approaches suggested by the researchers, the results illustrate the impact of fake news in elections and referendums. The researchers further demonstrate that merely by estimating the presence of fake news, an individual is able to largely mitigate its effects; Category II voters, who know the parameters of the fake news terms, illustrate this.

Future scope

The researchers plan to include optimal release strategies in their models. The election microstructure approach might also be developed further by allowing dependencies between the various factors. The researchers also plan to introduce several different information processes reflecting the news consumption preferences of different sections of society. These additions will be challenging, but an interesting direction for research might come out of them. To know more about the modeling techniques for fake news, check out the paper How to model fake news.

BabyAI: A research platform for grounded language learning with human in the loop, by Yoshua Bengio et al
Facebook is reportedly rating users on how trustworthy they are at flagging fake news
Four 2018 Facebook patents to battle fake news and improve news feed

What is Facebook hiding? New York Times reveals Facebook’s insidious crisis management strategy

Melisha Dsouza
15 Nov 2018
9 min read
Today has been one of the worst days in Facebook’s history. As if the plummeting stock that closed on Wednesday at just $144.22 were not enough, Facebook is now facing a backlash over its leadership’s morals. Yesterday, the New York Times published a scathing exposé on how Facebook wilfully downplayed its knowledge of the 2016 Russian meddling in US elections via its platform. It also alleges that, over the course of two years, Facebook adopted a ‘delay, deny and deflect’ strategy to maneuver through the chain of scandals plaguing the company, under the shrewd leadership of Sheryl Sandberg and a disconnected-from-reality CEO, Mark Zuckerberg. In the following sections, we dissect the NYT article and look at other related developments triggered in the wake of this news.

Facebook, with over 2.2 billion users globally, has accumulated one of the largest-ever repositories of personal data, including the user photos, messages and likes that propelled the company into the Fortune 500. Its platform has been used to make or break political campaigns, reshape the advertising business and change daily life around the world. Questions about the security of the platform have been constant, thanks to the controversies that have surrounded Facebook for well over two years. While Facebook’s responses to these scandals (“we should have done better”) have not convinced many, the company has never been considered ‘knowingly evil’ and has continued to enjoy the benefit of the doubt. The Times article now changes that.

Crisis management at Facebook: Delay, deny, deflect

The report by the New York Times is based on anonymous interviews with more than 50 people, including current and former Facebook executives and other employees, lawmakers and government officials, lobbyists and congressional staff members. As Facebook has grown over the past few years, so have the hate speech, bullying and other toxic content on its platform. The company has not fully taken responsibility for what users post, turning a blind eye and carrying on as what it says it is: a platform, not a publisher.

The report highlights the dilemma Facebook’s leadership faced over candidate Trump’s 2015 statement on Facebook calling for a “total and complete shutdown” on Muslims entering the United States. After a lengthy discussion, Mr. Schrage (a prosecutor whom Ms. Sandberg had recruited) concluded that Mr. Trump’s language had “not violated Facebook’s rules”. Mr. Kaplan (Facebook’s vice president of global public policy) argued that Mr. Trump was an important public figure and that shutting down his account or removing the statement would be perceived as obstructing free speech, leading to a conservative backlash. Sandberg decided to allow the post to stay on Facebook.

In the spring of 2016, Alex Stamos (Facebook’s former security chief) and his team discovered Russian hackers probing the Facebook accounts of people connected to the presidential campaign, along with Facebook accounts linked to Russian hackers who messaged journalists to share information from the stolen emails. Stamos directed a team to scrutinize the extent of Russian activity on Facebook, and by January 2017 it was clear that there was more to it. Mr. Kaplan believed that if Facebook implicated Russia further, Republicans would “accuse the company of siding with Democrats”, and that pulling down the Russians’ fake pages would offend regular Facebook users by revealing they had been deceived.
To summarize the findings, Mr. Zuckerberg and Ms. Sandberg released a blog post on 6 September 2017. The post had little information on fake accounts or on the organic posts created by Russian trolls that had gone viral on Facebook. You can head over to the New York Times to read in depth about what went on inside the company after the scandals were reported.

What is also surprising is that, instead of offering a clear explanation of the matters at hand, the company focused on taking a stab at those who made statements against Facebook. Take, for instance, Apple CEO Tim Cook, who criticized Facebook in an MSNBC interview and called it a service that traffics “in your personal life.” According to the Times, Mark Zuckerberg reportedly told his employees to use only Android phones in response to that statement.

Over 70 human rights groups write to Zuckerberg

Fresh reports have now emerged that the Electronic Frontier Foundation, Human Rights Watch, and over 70 other groups have written an open letter urging Mark Zuckerberg to adopt a clearer “due process” system for content takedowns. “Civil society groups around the globe have criticized the way that Facebook’s Community Standards exhibit bias and are unevenly applied across different languages and cultural contexts,” the letter says. “Offering a remedy mechanism, as well as more transparency, will go a long way toward supporting user expression.”

Zuckerberg rejects facetime call for answers from five parliaments

“The fact that he has continually declined to give evidence, not just to my committee, but now to an unprecedented international grand committee, makes him look like he’s got something to hide.” – DCMS chair Damian Collins

On October 31st, Zuckerberg was invited to give evidence before a UK parliamentary committee on 27th November, with politicians from Canada co-signing the invitation. The committee wanted answers about the “platform’s malign use in world affairs and democratic process”. Zuckerberg rejected the request on November 2nd. In yet another attempt to obtain answers, MPs from Argentina, Australia, Canada, Ireland and the UK joined forces with the UK’s Digital, Culture, Media and Sport (DCMS) committee last week to request a video call with Mark Zuckerberg. However, in a letter to the DCMS, Facebook declined the request, stating: “Thank you for the invitation to appear before your Grand Committee. As we explained in our letter of November 2nd, Mr. Zuckerberg is not able to be in London on November 27th for your hearing and sends his apologies.”

The letter does not explain why Zuckerberg is unavailable to speak to the committee via a video call. It summarizes a list of Facebook activities and related research that intersect with the topics of election interference, political ads, disinformation and security, but makes no mention of the company’s controversial actions and their after-effects.

Diverting scrutiny from the matter?

According to the NYT report, in October 2017, after a year of external criticism over its handling of Russian interference on its social network, Facebook expanded its relationship with Definers, a Washington-based public relations consultancy with Republican ties. Last year the firm wrote dozens of articles criticizing Facebook’s rivals Google and Apple while diverting focus from the impact of Russian interference on Facebook. It also pushed the idea that liberal financier George Soros was behind a growing anti-Facebook movement, according to the New York Times.
The PR team also reportedly pressed reporters to explore Soros’ financial connections with groups that protested Facebook at congressional hearings in July.

How are employees and users reacting?

According to the Wall Street Journal, only 52 percent of employees say they are optimistic about Facebook’s future, compared with 84 percent in 2017. Just under 29,000 workers (of more than 33,000 in total) participated in the biannual pulse survey. In the most recent poll, conducted in October, the numbers have fallen, much like the company’s tumbling stock, compared with last year’s survey. Just over half feel Facebook is making the world a better place, down 19 percentage points from last year; 70 percent said they were proud to work at Facebook, down from 87 percent; and overall favorability towards the company dropped from 73 to 70 percent since last October’s poll. Around 12 percent apparently plan to leave within a year.

Hacker News has comments from users stating that “Facebook needs to get its act together” and is “in need for serious reform”. Some also feel that “This Times piece should be taken seriously by FB, it's shareholders, employees, and users. With good sourcing, this paints a very immature picture of the company, from leadership on down to the users”. Readers have pointed out that Facebook’s integrity is questionable and that “employees are doing what they can to preserve their own integrity with their friends/family/community, and that this push is strong enough to shape the development of the platform for the better, instead of towards further addictive, attention-grabbing, echo chamber construction.”

Facebook’s reply to the New York Times report

Today, Facebook published a post in response to the Times’ report, listing what it says are a number of inaccuracies in the piece. Facebook asserts that it has been closely following the Russian investigation and gives its reasons for not naming Russia in the April 2017 white paper. The company also addresses the backlash it faced over Trump’s “Muslim ban” statement, which was not taken down, states that it strongly supports Mark and Sheryl in the fight against false news and information operations on Facebook, and explains why Sheryl championed sex trafficking legislation. Finally, in response to the controversy over advising employees to use only Android, the company clarified that it was because “it is the most popular operating system in the world”.

On the hiring of the PR firm Definers, Facebook says: “We ended our contract with Definers last night. The New York Times is wrong to suggest that we ever asked Definers to pay for or write articles on Facebook’s behalf – or to spread misinformation.” We can’t help but notice that, once again, Facebook is defending itself against allegations without providing a proper explanation of why it finds itself in controversies time and again. It is also surprising that the contract with Definers came to an abrupt end just before the Times report went live. What Facebook has additionally done is emphasize its improved security practices, something it talks about every time it faces a controversy. It is time to stop delaying, denying and deflecting. Instead, atone, accept, and act responsibly.
Facebook shares update on last week’s takedowns of accounts involved in “inauthentic behavior”
Emmanuel Macron teams up with Facebook in a bid to fight hate speech on social media
Facebook GEneral Matrix Multiplication (FBGEMM), high-performance kernel library, open sourced, to run deep learning models efficiently