
Tech News

GitHub is bringing back Game Off, its sixth annual game building competition, in November

Natasha Mathur
16 Oct 2018
3 min read
The GitHub team announced yesterday that it is coming back with its sixth annual game jam, Game Off, in November. In Game Off, participants are given one month to create games based on a theme provided by GitHub. Anyone, from newbies to professional game developers, can participate without any restrictions, and the game can be simple or complex, depending on your preference. The Game Off team recommends using open source game engines, libraries, and tools, but you can use any technology you want, as long as it is reliable. Both team and solo participation are acceptable, and you can even make multiple submissions.

The theme for last year's Game Off was "throwback". Over 200 games were created, including old-school LCD games, retro flight simulators, and squirrel-infested platformers. This year's theme will be announced on Thursday, November 1st, at 13:37.

Last year's overall winner, which was also voted best gameplay, was Daemon vs. Demon, a game whose hero must slay rogue demons to remain in the world of the living. It was built by a user named Securas from Japan with the open source Godot game engine. Winners were also picked in other categories, such as best audio, best theme interpretation, and best graphics.

To participate in Game Off, you need a GitHub account. You can then join the Game Off challenge on itch.io; you don't need a separate itch.io account, as you can simply log in with your GitHub account. Once you've joined on itch.io, all you need to do is create a new repository to store the source code and other related assets. Just make sure that you push your changes to the game before December 1st.

"As always, we'll highlight some of our favorite games on the GitHub Blog, and the world will get to enjoy (and maybe even contribute to or learn from) your creations", mentions the GitHub team.

For more information, check out the official Game Off announcement.

Meet wideNES: A new tool by Nintendo to let you experience the NES classics again
Meet yuzu – an experimental emulator for the Nintendo Switch
Now you can play Assassin's Creed in Chrome thanks to Google's new game streaming service

MIT plans to invest $1 billion in a new College of Computing that will serve as an interdisciplinary hub for computer science, AI, data science

Bhagyashree R
16 Oct 2018
3 min read
Yesterday, MIT announced that it is investing $1 billion to establish a new college for computing: the MIT Schwarzman College of Computing. The college is named after Mr. Schwarzman, the chairman, CEO, and co-founder of Blackstone, who has contributed $350 million towards it. The college will be dedicated to work in computer science, AI, data science, and related fields. This initiative, according to MIT, is the single largest investment in computing and AI by any American academic institution.

The MIT Schwarzman College of Computing aims to teach students the foundations of computing. Students will also learn how machine learning and data science can be applied in real life, and a curriculum will be designed to satisfy the growing interest in majors that cross computer science with other disciplines. Along with teaching advanced computing, the college will also focus on teaching and research on relevant policy and ethics, educating students about responsibly using these advanced technologies in support of the greater good.

Rafael Reif, MIT President, believes that this college will help students and researchers from various disciplines use computing and AI to advance their fields:

"As computing reshapes our world, MIT intends to help make sure it does so for the good of all. In keeping with the scope of this challenge, we are reshaping MIT. The MIT Schwarzman College of Computing will constitute both a global center for computing research and education, and an intellectual foundry for powerful new AI tools. Just as important, the College will equip students and researchers in any discipline to use computing and AI to advance their disciplines and vice-versa, as well as to think critically about the human impact of their work."

To attract distinguished individuals from other universities, government, industry, and journalism, the college plans to offer various opportunities. These include selective undergraduate research opportunities, graduate fellowships in ethics and AI, a seed-grant program for faculty, and a fellowship program. Fifty new faculty positions will also be created: 25 will be appointed to advance computing in the college, and the other 25 will be appointed jointly in the college and departments across MIT.

MIT has currently raised $650 million of the $1 billion required for the college, and its senior administration is actively looking for more contributors. Among the top partners in this initiative is IBM. Ginni Rometty, IBM chairman, president, and CEO, said: "As MIT's partner in shaping the future of AI, IBM is excited by this new initiative. The establishment of the MIT Schwarzman College of Computing is an unprecedented investment in the promise of this technology. It will build powerfully on the pioneering research taking place through the MIT-IBM Watson AI Lab. Together, we will continue to unlock the massive potential of AI and explore its ethical and economic impacts on society."

The MIT Schwarzman College of Computing is one of the most significant structural changes to MIT since the early 1950s. According to the official announcement, the college is likely to open in September next year, with construction work scheduled to complete in 2022. To read the full announcement, head over to MIT's official website.

MIT's Transparency by Design Network: A high performance model that uses visual reasoning for machine interpretability
IBM's DeepLocker: The Artificial Intelligence powered sneaky new breed of Malware
Video-to-video synthesis method: A GAN by NVIDIA & MIT CSAIL is now Open source

Indeed lists top 10 skills to land a lucrative job building autonomous vehicles

Melisha Dsouza
16 Oct 2018
3 min read
It is predicted that by 2025 the market for partially autonomous vehicles will hit 36 billion U.S. dollars, and autonomous cars are expected to take the market by storm. A thriving example is Tesla Autopilot, which has driven an estimated 1.4 billion miles as this post goes live. As the autonomous car sector grows, skilled individuals are in high demand, and top companies have already started hiring to get people on board.

Last week, Indeed's analytics team put together a list of companies with job descriptions related to 'autonomous vehicles'. Here is the list of top companies hiring for autonomous vehicle jobs:

(Source: Indeed.com)

The company topping the charts is Aptiv. Aptiv operates out of the Detroit metro area and is focused on self-driving and connected vehicles; the company plans to add around 5,000 to 6,000 employees. Following Aptiv is NVIDIA, a company well known for its chips, which makes the computers that power the self-driving capabilities in every Tesla vehicle. Along with Tesla, NVIDIA has also partnered with Audi to build autonomous-driving capabilities.

Two of the biggest auto manufacturers based in Detroit, General Motors and Ford, are at number three and number four respectively. Both companies have shown interest and invested heavily in autonomous vehicle technology in recent years. The rest of the list comprises newer companies testing the waters of autonomous vehicles. Intel, surprisingly, stands at number eight; it seems this company, known for making semiconductor chips and personal computer microprocessors, is also showing a growing interest in the domain. Samsung Semiconductor also makes the list, along with Flex.

Skills needed for jobs in autonomous vehicles

According to Indeed, here is the list of the top 10 skills that individuals looking for a job in the self-driving car domain must possess:

(Source: Indeed.com)

As seen from the list, most of these skills are programming related, which may come as a surprise to automobile engineers who are not concerned with software development at all. Along with programming languages like C and C++, individuals are also expected to have sound knowledge of image processing and artificial intelligence. This is not surprising, considering that posts for AI-related roles on Indeed almost doubled between June 2015 and June 2018.

While there is no strong evidence that this sector will flourish in the future, it is clear that companies have their eye on this domain. It would be interesting to see the kind of skill set this domain encourages individuals to develop. To know more about this report, head over to Indeed.com.

Tesla is building its own AI hardware for self-driving cars
This self-driving car can drive in its imagination using deep reinforcement learning
Baidu Apollo autonomous driving vehicles gets machine learning based auto-calibration system

Twilio acquires SendGrid, a leading Email API Platform, to bring email services to its customers

Natasha Mathur
16 Oct 2018
3 min read
Twilio Inc., a cloud communications platform, announced yesterday that it is acquiring SendGrid, a leading email API platform. Twilio has focused mainly on providing voice calling, text messaging, video, and web and mobile chat services; SendGrid, on the other hand, has focused purely on providing email services. With this acquisition, Twilio aims to bring tremendous value to the combined customer bases by offering services around voice, video, chat, as well as email.

"Email is a vital communications channel for companies around the world, and so it was important to us to include this capability in our platform. The two companies share the same vision, the same model, and the same values," mentioned Jeff Lawson, Twilio's co-founder and chief executive officer.

The two companies will also focus on making it easy for developers to build on a single, best-in-class communications platform. This would help them better manage all of their important communication channels, including voice, messaging, video, and email.

As per the terms of the deal, SendGrid will become a wholly-owned subsidiary of Twilio, and once the deal is closed, SendGrid's common stock will be converted into Twilio stock. "At closing, each outstanding share of SendGrid common stock will be converted into the right to receive 0.485 shares of Twilio Class A common stock, which represents a per share price for SendGrid common stock of $36.92 based on the closing price of Twilio Class A common stock on October 15, 2018. The exchange ratio represents a 14% premium over the average exchange ratio for the ten calendar days ending October 15, 2018", reads Twilio's press release. The boards of directors of both Twilio and SendGrid have approved the transaction.

"Our two companies have always shared a common goal - to create powerful communications experiences for businesses by enabling developers to easily embed communications into the software they are building. Our mission is to help our customers deliver communications that drive engagement and growth, and this combination will allow us to accelerate that mission for our customers", said Sameer Dholakia, SendGrid's CEO.

The acquisition is expected to close in the first half of 2019, subject to the satisfaction of customary closing conditions, including approval by the shareholders of both SendGrid and Twilio. "We believe this is a once-in-a-lifetime opportunity to bring together the two leading developer-focused communications platforms to create the unquestioned platform of choice for all companies looking to transform their customer engagement", said Lawson.

For more information, check out the official Twilio press release.

Twilio WhatsApp API: A great tool to reach new businesses
Make phone calls and send SMS messages from your website using Twilio
Building a two-way interactive chatbot with Twilio: A step-by-step guide
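Until the deal closes, the two products remain separate developer APIs. As a rough illustration of the "all channels from one app" idea, here is a hedged TypeScript sketch that sends an SMS through Twilio and an email through SendGrid from the same Node.js service, using each company's existing client library; the credentials, phone numbers, and addresses below are placeholders.

```typescript
// Sketch only: combine Twilio SMS and SendGrid email in one Node.js service.
import twilio from "twilio";
import sgMail from "@sendgrid/mail";

const smsClient = twilio(process.env.TWILIO_ACCOUNT_SID!, process.env.TWILIO_AUTH_TOKEN!);
sgMail.setApiKey(process.env.SENDGRID_API_KEY!);

async function notifyCustomer(phone: string, email: string, message: string): Promise<void> {
  // Text message via Twilio's Programmable SMS API
  await smsClient.messages.create({
    body: message,
    from: "+15005550006", // placeholder Twilio number
    to: phone,
  });

  // Email via SendGrid's v3 Mail Send API
  await sgMail.send({
    to: email,
    from: "noreply@example.com", // placeholder verified sender
    subject: "Order update",
    text: message,
  });
}

notifyCustomer("+15005550001", "customer@example.com", "Your order has shipped.")
  .catch(console.error);
```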

Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework

Sugandha Lahoti
16 Oct 2018
3 min read
Platform9 has announced a new release of Fission.io, the open source, Kubernetes-native serverless framework. Its new features enable developers and IT operations to improve the quality and reliability of serverless applications. Fission comes with built-in Live-reload and Record-replay capabilities to simplify testing and accelerate feedback loops. Other new features include Automated Canary Deployments to reduce the risk of failed releases, Prometheus integration for automated monitoring and alerts, and fine-grained cost and performance optimization capabilities.

With this latest release, Fission also allows Dev and Ops teams to safely adopt serverless and benefit from the speed, cost savings, and scalability of this cloud-native development pattern on public cloud or on-premises. Let's look at the features in detail.

Live-reload: Test as you type

With Live-reload, Fission automatically deploys code as it is written into a live Kubernetes test cluster. It allows developers to toggle between their development environment and the runtime of the function, to rapidly iterate through their coding and testing cycles.

Record-replay: Simplify testing and debugging

(Image via Fission)

Record-replay automatically saves the events that trigger serverless functions and allows these events to be replayed on demand. Record-replay can also reproduce complex failures during testing or debugging, simplify regression testing, and help troubleshoot issues. Operations teams can use recording on a subset of live production traffic to help engineers reproduce issues or verify application updates.

Automated Canary Deployments: Reduce the risk of failed releases

Fission provides fully automated canary deployments that are easy to configure. With Automated Canary Deployments, Fission automatically increments the proportion of traffic sent to the newer version of a function as long as it succeeds, and rolls back to the old version if the new version fails.

Prometheus Integration: Easy metrics collection and alerts

Integration with Prometheus enables automatic aggregation of function metrics, including the number of function calls, function execution time, successes, failures, and more. Users can also define custom alerts for key events, such as when a function fails or takes too long to execute. Prometheus metrics can also feed monitoring dashboards to visualize application metrics.

(Image via Fission)

One of Fission's users, Kenneth Lam, Director of Technology at Snapfish, said, "Fission allows our company to benefit from the speed, cost savings and scalability of a cloud-native development pattern on any environment we choose, whether it be the public cloud or on-prem."

You can learn more about Fission on its website, and also go through a quick demo of all the new features.

How to deploy Serverless Applications in Go using AWS Lambda [Tutorial]
Azure Functions 2.0 launches with better workload support for serverless
How Serverless computing is making AI development easier
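For readers who haven't used Fission, here is a minimal sketch of a function for its Node.js environment, based on the module.exports handler convention in the Fission documentation. The simplified context and response types, and the deployment commands in the comments, are assumptions for illustration rather than an exact API reference.

```typescript
// Minimal sketch of a Fission function for the Node.js environment (fission/node-env).
// The context/response shapes below are simplified assumptions.

interface FissionContext {
  // Incoming HTTP request as exposed by the Node runtime (assumed shape)
  request: { headers: Record<string, string>; body: unknown };
}

interface FissionResponse {
  status: number;
  body: string;
}

// Typically deployed with commands along the lines of:
//   fission env create --name nodejs --image fission/node-env
//   fission function create --name hello --env nodejs --code hello.js
//   fission route create --method GET --url /hello --function hello
module.exports = async function (context: FissionContext): Promise<FissionResponse> {
  return {
    status: 200,
    body: "Hello from a Fission function!\n",
  };
};
```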

Tesla v9 to incorporate neural networks for autopilot

Prasad Ramesh
16 Oct 2018
3 min read
Tesla, the electric carmaker led by Elon Musk, is incorporating larger neural networks for Autopilot in the new Tesla v9 software. Based on the new Autopilot capabilities of version 9, it is clear that the new neural net is a significant upgrade over v8: it can now track vehicles and other objects around the car by making better use of the eight cameras around the car.

Tesla Motors Club member Jimmy_d, a deep learning expert, has shared his thoughts on v9 and the neural network used in it. Tesla has now deployed a new camera network to handle all 8 cameras. Like v8, the v9 neural network system consists of a set of 'camera networks' which process camera output directly, plus a separate set of 'post-processing' networks that take output from the camera networks and turn it into higher-level actionable abstractions. V9 is a pretty big change from v8. The other major changes from v8 to v9, as stated by Jimmy, are:

- The same weight file is used for all cameras (this has pretty interesting implications; previously, v8 main/narrow seems to have had separate weights for each camera).
- Processed resolution of the 3 front cameras and the back camera: 1280×960 (full camera resolution).
- Processed resolution of the pillar and repeater cameras: 640×480 (half the camera's true resolution in each dimension).
- All cameras: 3 color channels, 2 frames (2 frames also has very interesting implications). In v8 this was 640×416, 2 color channels, 1 frame, for the main and narrow cameras only.

These camera changes mean a much larger neural network that requires more processing power. The v9 network takes images at a resolution of 1280×960 with 3 color channels and 2 frames per camera. That's 1280×960×3×2 as an input, which is about 7.3 MB. The v8 main camera processing frame was 640×416×2, that is, about 0.5 MB. The v9 network therefore has access to far more detail.

About the network size, Jimmy said: "This V9 network is a monster, and that's not the half of it. When you increase the number of parameters (weights) in an NN by a factor of 5 you don't just get 5 times the capacity and need 5 times as much training data. In terms of expressive capacity increase it's more akin to a number with 5 times as many digits. So if V8's expressive capacity was 10, V9's capacity is more like 100,000."

Tesla CEO Elon Musk had something to say about the estimates made by Jimmy:
https://twitter.com/elonmusk/status/1052101050465808384

The amount of training data doesn't go up by a mere 5x; it takes at least thousands and even millions of times more data to fully utilize a network that has 5x as many parameters. We should see this new neural network implementation on the road in new cars about six months down the line. For more details, you can view the discussion on the Tesla Motors Club website.

Tesla is building its own AI hardware for self-driving cars
Elon Musk reveals big plans with Neuralink
DeepMind, Elon Musk, and others pledge not to build lethal AI
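As a quick sanity check on the numbers quoted above (assuming one byte per color channel per pixel, which the post does not state explicitly), the per-camera input sizes work out as follows:

```typescript
// Back-of-the-envelope check of the input sizes quoted above, assuming one byte
// per color channel per pixel (an assumption; the post does not state the bit depth).
const inputBytes = (w: number, h: number, channels: number, frames: number): number =>
  w * h * channels * frames;

const v9PerCamera = inputBytes(1280, 960, 3, 2); // 7,372,800 bytes, i.e. the ~7.3 MB figure above
const v8MainCamera = inputBytes(640, 416, 2, 1); // 532,480 bytes, i.e. the ~0.5 MB figure above

console.log(`V9 input per camera:  ${v9PerCamera.toLocaleString()} bytes`);
console.log(`V8 main camera input: ${v8MainCamera.toLocaleString()} bytes`);
console.log(`Roughly ${(v9PerCamera / v8MainCamera).toFixed(1)}x more data per camera in V9`);
```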

Paul Allen, Microsoft co-founder, philanthropist, and developer dies of cancer at 65

Natasha Mathur
16 Oct 2018
4 min read
Paul Allen, the co-founder of Microsoft, passed away yesterday afternoon from complications of non-Hodgkin's lymphoma, at the age of 65, in Seattle. Allen was also the chairman and owner of Vulcan Inc., a privately held firm that controlled and looked after Allen's business, investments, and philanthropic efforts.

A statement was released by Vulcan CEO Bill Hilf on behalf of Vulcan Inc. and the Paul G. Allen network, which includes the Seattle Seahawks, Portland Trail Blazers, Stratolaunch Systems, the Allen Institute, and the Allen Institute for Artificial Intelligence: "All of us who had the honor of working with Paul feel an inexpressible loss today. He possessed a remarkable intellect and a passion to solve some of the world's most difficult problems, with the conviction that creative thinking and new approaches could make a profound and lasting impact. Today we mourn our boss, mentor, and friend whose 65 years were too short – and acknowledge the honor it has been to work alongside someone whose life transformed the world."

Allen co-founded Microsoft with Bill Gates back in 1975. He was only 14 years old when he met Gates, who was 12 at the time, while attending the Lakeside School outside Seattle. Both later dropped out of college to pursue their shared passion for computers. It was in 1981, when Allen and Gates reinvented Q-DOS as MS-DOS and installed it as the operating system for IBM's PC offering, that Microsoft was catapulted into its dominant position in the PC industry.

Bill Gates released a statement on Allen's death, as first reported by ABC:
https://twitter.com/ABC/status/1052133395923382273

Allen was first diagnosed with Stage 1-A Hodgkin's lymphoma back in 1982. In 1983, he resigned from Microsoft as executive vice president of research and product development because of these health issues. He then underwent several radiation treatments and his health was restored. Allen hit another wall in 2009, when he was diagnosed with non-Hodgkin lymphoma; he successfully beat that diagnosis too. It was only earlier this month that he revealed he had restarted treatment because the cancer had returned. Allen tweeted about the diagnosis:
https://twitter.com/PaulGAllen/status/1046864324310982668

Non-Hodgkin lymphoma is an uncommon cancer in which the affected lymphocytes start multiplying in an abnormal way and begin to collect in certain parts of the lymphatic system, such as the lymph nodes (glands).

Microsoft released a statement yesterday: "Microsoft is mourning the passing of Paul Allen, a renowned philanthropist and business leader who co-founded the company more than four decades ago". Microsoft also tweeted out the statement from Microsoft CEO Satya Nadella:
https://twitter.com/Microsoft/status/1051960986385543168

Apart from Microsoft, Tim Cook, CEO of Apple Inc., also tweeted about Allen's death:
https://twitter.com/tim_cook/status/1051976402126270464

Over the course of his lifetime, Allen achieved many outstanding milestones. He was number 21 on the Forbes 400 in 2018 and the world's 27th richest person on the Bloomberg Billionaires Index, with a net worth of $26.1 billion. He had also received numerous awards in different areas, such as sports, philanthropy, and the arts.

Many people from the tech community and other fields have paid tribute to Allen on Twitter:
https://twitter.com/QuincyDJones/status/1051970144514134016
https://twitter.com/RMac18/status/1051956977062932480
https://twitter.com/trailblazers/status/1051962568292458496
https://twitter.com/PeteCarroll/status/1051965960830181376
https://twitter.com/710ESPNSeattle/status/1051968617493946368
https://twitter.com/LeoDiCaprio/status/1051994942552375301
https://twitter.com/Seahawks/status/1052077241947942912

Microsoft joins the Open Invention Network community, making 60,000 of its patents accessible to fellow members
Microsoft announces Project xCloud, a new Xbox game streaming service, on the heels of Google's Stream news last week
Microsoft invests in Grab; together aim to conquer the Southeast Asian on-demand services market with Azure's Intelligent Cloud

Google Chrome, Mozilla Firefox, and others to disable TLS 1.0 and TLS 1.1 in favor of TLS 1.2 or later by 2020

Savia Lobo
16 Oct 2018
2 min read
Yesterday, Google, Mozilla, Apple, and Microsoft announced that by 2020 they will disable TLS 1.0 and 1.1 by default in their respective browsers. Kyle Pflug, Senior Program Manager for Microsoft Edge, said, "January 19th of next year marks the 20th anniversary of TLS 1.0, the inaugural version of the protocol that encrypts and authenticates secure connections across the web."

Chrome, Edge, Internet Explorer, Firefox, and Safari already support TLS 1.2 and will soon support the recently approved final version of the TLS 1.3 standard. Chrome and Firefox already support TLS 1.3, while Apple and Microsoft are still working towards supporting it.

Why disable TLS 1.0 and 1.1?

The Internet Engineering Task Force (IETF), the organization that develops and promotes Internet standards, is hosting discussions to formally deprecate both TLS 1.0 and 1.1. TLS provides confidentiality and integrity of data in transit between clients and servers while exchanging information, and keeping this data safe requires using modern, highly secure versions of the protocol. Apple's Secure Transports team has listed some benefits of moving away from TLS 1.0 and 1.1, including:

- Modern cryptographic cipher suites and algorithms with desirable performance and security properties, e.g., perfect forward secrecy and authenticated encryption, that are not vulnerable to attacks such as BEAST.
- Removal of the mandatory and insecure SHA-1 and MD5 hash functions as part of peer authentication.
- Resistance to downgrade-related attacks such as LogJam and FREAK.

For Google Chrome users, enterprise deployments can preview the TLS 1.0 and 1.1 removal today by setting the SSLVersionMin policy to 'tls1.2'. For enterprise deployments that need more time, this same policy can be used to re-enable TLS 1.0 or TLS 1.1 until January 2021.

Post-deprecation, here is what each browser maker has promised:

- TLS 1.0 and 1.1 will be disabled altogether in Chrome 81, which will start rolling out "on early release channels starting January 2020."
- Edge and Internet Explorer 11 will disable TLS 1.0 and TLS 1.1 by default "in the first half of 2020."
- Firefox will drop support for TLS 1.0 and TLS 1.1 in March 2020.
- TLS 1.0 and 1.1 will be removed from Safari in updates to Apple iOS and macOS beginning in March 2020.

Read more about this news in detail in the Internet Engineering Task Force (IETF) blog post.

Introducing TLS 1.3, the first major overhaul of the TLS protocol with improved security and speed
Let's Encrypt SSL/TLS certificates gain the trust of all Major Root Programs
Java 11 is here with TLS 1.3, Unicode 11, and more updates
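Developers who want to get ahead of the browser deadlines can already enforce the same floor in their own clients. Here is a minimal Node.js/TypeScript sketch using the tls module's minVersion option; the hostname is a placeholder.

```typescript
// Illustrative only: a Node.js client that refuses TLS 1.0/1.1 handshakes
// by setting a minimum protocol version.
import * as tls from "tls";

const socket = tls.connect(
  {
    host: "example.com",
    port: 443,
    servername: "example.com",
    minVersion: "TLSv1.2", // reject TLS 1.0 and 1.1
  },
  () => {
    // Prints the negotiated protocol, e.g. "TLSv1.2" or "TLSv1.3"
    console.log(`Negotiated protocol: ${socket.getProtocol()}`);
    socket.end();
  }
);

socket.on("error", (err) => console.error("Handshake failed:", err.message));
```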

Chrome V8 7.0 is in beta, to release with Chrome 70

Prasad Ramesh
16 Oct 2018
2 min read
Chrome V8 7.0 is in beta, with features like embedded built-ins and new JavaScript language features, and will ship with Chrome 70 Stable. A new branch of V8 is created every six weeks: just before a Chrome Beta milestone release, each version is branched from the V8 master on GitHub. V8 7.0 was announced yesterday and is filled with features to enhance the developer experience.

Embedded built-ins

Embedded built-ins share the generated code across multiple V8 Isolates to save memory. Beginning with V8 v6.9, embedded built-ins were enabled on x64; V8 7.0 brings these memory savings to all other platforms, with the exception of ia32.

WebAssembly Threads preview

WebAssembly enables compiling code written in C++ and other languages to run on the web. A very useful feature of native applications is the ability to use threads, a primitive for parallel computation; pthreads, a standard API for application thread management, is familiar to most C and C++ developers. The WebAssembly Community Group has been working to bring threads to the web in order to enable real multi-threaded applications, and V8 has implemented the required support for threads in its WebAssembly engine as part of this effort. To use this in Chrome, enable it via chrome://flags/#enable-webassembly-threads, or sign up for an Origin Trial. With Origin Trials, developers can experiment with new web features before they are standardized, which also helps the feature's creators gather the real-world feedback that is critical to validate and improve new features.

JavaScript features in V8

A description property has been added to Symbol.prototype to provide an ergonomic way of accessing a Symbol's description. Before this change, the description could be accessed only indirectly through Symbol.prototype.toString(). Array.prototype.sort is now stable in V8 v7.0: V8 previously used an unstable QuickSort for arrays with more than 10 elements; now, the stable TimSort algorithm is in use.

For more details, visit the V8 blog.

Google announces updates to Chrome DevTools in Chrome 71
Chrome 69 privacy issues: automatic sign-ins and retained cookies; Chrome 70 to correct these
Origin DApp: A decentralized marketplace on Ethereum mainnet aims to disrupt gig economy platforms like Airbnb and Uber
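For a quick feel of the two JavaScript features mentioned above, here is a small TypeScript snippet that should behave as described on a V8 7.0+ runtime such as Chrome 70:

```typescript
// Symbol.prototype.description: direct access to a symbol's description
const sym = Symbol("session-id");
console.log(sym.description); // "session-id"
console.log(sym.toString());  // "Symbol(session-id)" (the old, indirect route)

// Array.prototype.sort is now stable: elements that compare equal keep their
// original relative order. Before V8 7.0 this was only guaranteed for arrays
// of 10 elements or fewer, which fell back to a stable insertion sort.
const entries = [
  { name: "b", rank: 1 },
  { name: "a", rank: 1 },
  { name: "c", rank: 0 },
];
entries.sort((x, y) => x.rank - y.rank);
console.log(entries.map((e) => e.name)); // ["c", "b", "a"]: "b" stays ahead of "a"
```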

“We call on the UN to invest in data-driven predictive methods for promoting peace”, Nature researchers on the eve of ViEWS conference

Sugandha Lahoti
16 Oct 2018
4 min read
Yesterday, in an article published in Nature, the international journal of science, prominent political researchers Weisi Guo, Kristian Gleditsch, and Alan Wilson discussed how artificial intelligence can be used to predict outbursts of violence, potentially saving lives and promoting peace. This sets the stage for the ongoing two-day ViEWS conference organized by Uppsala University in Sweden, which focuses on violence early-warning systems.

According to their investigation, governments and international communities can often flag spots that may become areas of armed violence using algorithms that forecast risk, similar to the methods used for forecasting extreme weather. These algorithms estimate the likelihood of violence by extrapolating from statistical data and analyzing text in news reports to detect tensions and military developments. Artificial intelligence is now poised to boost the power of these approaches. Some AI systems already working in this area include Lockheed Martin's Integrated Crisis Early Warning System, the Alan Turing Institute's project on global urban analytics for resilient defense, which studies the mechanics that cause conflict, and the US government's Political Instability Task Force.

The researchers believe artificial intelligence will help conflict models make correct predictions. This is because machine learning techniques offer more information about the wider causes of conflicts and their resolution, and provide theoretical models that better reflect the complexity of social interactions and human decision-making.

How AI and predictive methods could prevent conflicts

The article describes how AI systems could prevent conflicts and take the actions necessary to promote peace. Broadly, the researchers suggest the following measures to improve conflict forecasting:

- Broaden data collection
- Reduce unknowns
- Develop theories
- Set up a global consortium

Ideally, AI systems should be capable of offering explanations for violence and providing strategies for preventing it. However, this may prove difficult because conflict is dynamic and multi-dimensional, and the data collected at present is narrow, sparse, and disparate. AI systems need to be trained to make inferences: presently, they learn from existing data, test whether predictions hold, and then refine the algorithms accordingly. This assumes that the training data mirrors the situation being modeled, which is often not the case in reality and can make the predictions unreliable.

Another important aspect the article describes is modeling complexity. An AI system should decide where it is best to intervene for a peaceful outcome and how much intervention is needed. The article also urges conflict researchers to develop a universally agreed framework of theories to describe the mechanisms that cause wars; such a framework should dictate what sort of data is collected and what needs to be forecast.

The researchers have also proposed that an international consortium be set up to develop formal methods to model the steps a society takes to wage war. The consortium should involve academic institutions, international and government bodies, and industrial and charity interests in reconstruction and aid work. All research done by its members must use open data, be reproducible, and have benchmarks for results.

Ultimately, their vision for the proposed consortium is to "set up a virtual global platform for comparing AI conflict algorithms and socio-physical models." They concluded, "We hope to take the first steps to agree to a common data and modeling infrastructure at the ViEWS conference workshop on 15–16 October."

Read the full article in Nature.

Google Employees Protest against the use of Artificial Intelligence in Military
'Employees of Microsoft' ask Microsoft not to bid on US Military's Project JEDI in an open letter
Google opts out of Pentagon's $10 billion JEDI cloud computing contract, as it doesn't align with its ethical use of AI principles

The Ember project releases version 3.5 of Ember.js, Ember Data, and Ember CLI

Bhagyashree R
16 Oct 2018
3 min read
After the release of Ember 3.4 earlier this month, the Ember project has released version 3.5 of its three core sub-projects: Ember.js, Ember Data, and Ember CLI. This release boasts up to a 32% performance improvement in Ember CLI builds and a new Ember Data interface, RecordData, aimed at addon developers. This version also kicks off the 3.6 beta cycle for all three sub-projects. Additionally, Ember 3.4 is now promoted to LTS (Long Term Support), which means it will continue to receive security updates for 9 release cycles and bug fixes for 6 cycles. Let's explore what has been added in this release.

Updates in Ember.js 3.5

This version is an incremental, backwards-compatible release with two small bug fixes, which pave the way for new features in future releases:

- In some cases an Alias wouldn't tear down properly, leaving an unbalanced watch count in meta. This is now fixed.
- Naming routes as "array" and "object" is now allowed.

Updates in Ember Data 3.5

This release hits two milestones: the very first LTS release of ember-data, and the RecordData interfaces.

RecordData

RecordData gives addon developers much-needed API access with more confidence and stability. This new addition will let developers easily implement many commonly requested features, such as improved dirty-tracking, fragments, and alternative Models, in addons. With this feature added, the Ember developers are considering deprecating and removing use of the private-but-intimate InternalModel API. Be warned that this change might cause some regressions in your applications.

RecordData use with ModelFragments

Most community addons work with RecordData versions of ember-data, but RecordData currently does not work with ember-data-model-fragments. If you are using this addon, it is advisable to stay on ember-data 3.4 LTS until the community has released a compatible version.

Updates in Ember CLI 3.5

Three new features have been added in Ember CLI 3.5:

Upgraded to Broccoli v2.0.0

Earlier, tools in the Ember ecosystem relied on a fork of Broccoli. From this release, Ember CLI uses Broccoli 2.0 directly.

Build speed improvements of up to 32%

Developers will see speed improvements in their builds, thanks to Broccoli 2, which allows Ember CLI to use the default system temp directory instead of a ./tmp directory local to the project folder. Users may see up to 32% improvements in build time, depending on their hardware.

Migrated to ember-qunit

As all of the main functionality lives in ember-qunit, while ember-cli-qunit is just a very thin shim over it, Ember CLI has migrated to ember-qunit. It now uses ember-qunit directly, and ember-cli-qunit will ultimately be deprecated; see the test sketch below.

To read the full list of updates, check out the official announcement by the Ember community.

The Ember project announces version 3.4 of Ember.js, Ember Data, and Ember CLI
Ember project releases v3.2.0 of Ember.js, Ember Data, and Ember CLI
Getting started with Ember.js – Part 1
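To illustrate the ember-qunit migration mentioned above, here is a hedged sketch of a unit test written against ember-qunit directly; the "example" service name is hypothetical, and the exact setup may differ between projects.

```typescript
// Sketch of a unit test using ember-qunit directly (instead of the ember-cli-qunit shim).
import { module, test } from "qunit";
import { setupTest } from "ember-qunit";

module("Unit | Service | example", function (hooks) {
  setupTest(hooks);

  test("it exists", function (this: any, assert) {
    // setupTest exposes the application container to each test via this.owner
    const service = this.owner.lookup("service:example");
    assert.ok(service, "the service can be looked up from the container");
  });
});
```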

Twitter on the GDPR radar for refusing to provide a user his data due to ‘disproportionate effort’ involved

Savia Lobo
16 Oct 2018
3 min read
After Google escaped a huge GDPR fine last month, Twitter is next on the EU's GDPR investigation checklist; this appears to be the first GDPR investigation opened against Twitter. Last week, the data privacy regulators in Ireland opened an investigation into Twitter's data collection practices, specifically to analyze the amount of data Twitter receives from its URL-shortening system, t.co. Twitter says URL shortening allows the platform to measure the number of clicks per link and helps it fight the spread of malware through suspicious links.

Why did Irish data regulators choose to investigate Twitter?

This news was first reported by Fortune, which stated, "Michael Veale, who works at University College London, suspects that Twitter gets more information when people click on t.co links, and that it might use them to track those people as they surf the web, by leaving cookies in their browsers."

Veale asked Twitter to provide him with all the personal data it holds on him. Twitter refused, claiming that providing this information would take a disproportionate effort. Following this, Veale filed a complaint with the Irish Data Protection Commission (DPC), and the authorities opened an investigation last week.

In a letter to Veale, the Irish Data Protection Commissioner wrote, "The DPC has initiated a formal statutory inquiry in respect of your complaint. The inquiry will examine whether or not Twitter has discharged its obligations in connection with the subject matter of your complaint and determine whether or not any provisions of the GDPR or the [Irish Data Protection] Act have been contravened by Twitter in this respect."

The Irish authorities said that Veale's complaint will be handled by the new European Data Protection Board, as it involves cross-border processing. This EU data protection body helps national data protection authorities coordinate their GDPR enforcement efforts.

Veale also prompted a similar probe into Facebook, which likewise refused to hand over data held on users' web-browsing activities. However, as Fortune notes, "Facebook was already the subject of multiple GDPR investigations." Veale says, "Data which looks a bit creepy, generally data which looks like web-browsing history, [is something] companies are very keen to keep out of data access requests. The user has a right to understand."

Twitter declined to comment, saying only that it was 'actively engaged' with the DPC. If Twitter is found to be in breach of the GDPR, it could face a fine of up to €20m or up to 4 percent of its global annual revenue.

To know more about this news in detail, head over to Fortune's full coverage.

How Twitter is defending against the Silhouette attack that discovers user identity
GDPR is good for everyone: businesses, developers, customers
The much loved reverse chronological Twitter timeline is back as Twitter attempts to break the 'filter bubble'

Microsoft fixing and testing the Windows 10 October update after file deletion bug

Prasad Ramesh
16 Oct 2018
2 min read
Microsoft started re-releasing the Windows 10 October update last week. The update had been halted earlier due to a bug that was deleting user files and folders. After data deletion was reported by multiple users, Microsoft pulled the update, investigated all of the data loss reports, and fixed all known issues. It also conducted internal validation and is providing free customer service for affected users. Microsoft is currently rolling out the update to a small group of testers known as the Windows Insider community, and will carefully study the diagnostic data and the feedback from these tests before a general public release.

What caused the issue?

In the Windows 10 April 2018 Update, users with Known Folder Redirection (KFR) enabled reported an extra, empty copy of Known Folders on their computer. Code was introduced in the October 2018 Update to remove these empty folders. That change, combined with another change to the update construction sequence, resulted in the deletion of the original "old" folder locations and their content, leaving PCs with only the new "active" folder.

The file deletion issue occurred if KFR was enabled before the update. KFR is the process of redirecting known Windows folders like Desktop, Documents, Pictures, Screenshots, and Videos from their default location to a new folder location. The files were deleted because they remained in the original "old" folder location instead of being moved to the new, redirected location.

Further actions

The team apologized for any impact these issues had on users. In the blog post, John Cable, Director of Program Management for Windows Servicing and Delivery, stated: "We will continue to closely monitor the update and all related feedback and diagnostic data from our Windows Insider community with the utmost vigilance. Once we have confirmation that there is no further impact we will move towards an official re-release of the Windows 10 October 2018 Update."

For more details, visit the official Microsoft blog.

Microsoft pulls Windows 10 October update after it deletes user files
Microsoft Your Phone: Mirror your Android phone apps on Windows
.NET Core 2.0 reaches end of life, no longer supported by Microsoft

IBM launches industry's first ‘Cybersecurity Operations Center on Wheels’ for on-demand cybersecurity support

Melisha Dsouza
16 Oct 2018
4 min read
"Having a mobile facility that allows us to bring realistic cyberattack preparation and rehearsal to a larger, global audience will be a game changer in our mission to improve incident response efforts for organizations around the world."
-Caleb Barlow, vice president of Threat Intelligence at IBM Security

Yesterday (15 October), IBM Security announced the industry's first mobile Security Operations Center: the IBM X-Force Command Cyber Tactical Operations Center (C-TOC). This mobile command center, housed in the back of a semi truck, will travel around the U.S. and Europe for cybersecurity training, preparedness, and response operations. The aim of the project is to provide on-demand cybersecurity support while building cybersecurity awareness and skills among professionals, students, and consumers.

Cybercriminals are getting smarter by the day and cyber crimes are becoming more sophisticated by the hour, so it is necessary for organizations to plan and rehearse their response to potential security breaches in advance. According to the 2018 Cost of a Data Breach Study, companies that respond to incidents effectively and remediate the event within 30 days can save over $1 million on the total cost of a data breach. Taking this into consideration, the C-TOC has the potential to provide immediate onsite support for clients whenever their cybersecurity needs arise.

The mobile vehicle is modeled after the Tactical Operations Centers used by the military and the incident command posts used by first responders. It comes with a gesture-controlled cybersecurity "watch floor," a data center, and conference facilities, and has self-sustaining power and satellite and cellular communications, which provide a sterile and resilient network for investigation and response and serve as a platform for cybersecurity training.

(Source: IBM)

Here are some of the key ways individuals can benefit from this mobile Security Operations Center:

#1 Focus on Response Training and Preparedness

The C-TOC will simulate real-world scenarios to depict how hackers operate, helping companies train their teams to respond to attacks. The training will cover key strategies to protect a business and its resources from cyberattacks.

#2 Onsite Cybersecurity Support

The C-TOC is mobile and can be deployed as an on-demand Security Operations Center. It aims to provide a realistic cybersecurity experience in the industry while visiting local universities and businesses to build interest in cybersecurity careers and to address other cybersecurity concerns.

#3 Cyber Best Practices Laboratory

The C-TOC training includes real-world examples based on experiences with customers in the Cambridge Cyber Range. Attack scenarios will be designed for teams to participate in, built around pointers such as working as a team to mitigate attacks, thinking like a hacker, hands-on experience with a malicious toolset, and much more.

#4 Supplementary Cybersecurity Operations

The IBM team also aims to spread awareness of the anticipated cybersecurity workforce shortage. With an expected shortfall of nearly 2 million cybersecurity professionals by 2022, it is necessary to educate the masses about careers in security as well as help upskill current professionals in cybersecurity.

This is one of many initiatives taken by IBM to raise awareness about the importance of mitigating cyberattacks in time. Back in 2016, IBM invested $200 million in new incident response facilities, services, and software, which included the industry's first Cyber Range for the commercial sector. Through real-world simulation of cyberattacks and training individuals to come up with advanced defense strategies, the C-TOC aims to bring realistic cyberattack preparation and rehearsal to a larger, global audience.

To know more about this news, as well as the dates the C-TOC will tour the U.S. and Europe, head over to IBM's official blog.

Mozilla announces $3.5 million award for ‘Responsible Computer Science Challenge’ to encourage teaching ethical coding to CS graduates
The Intercept says Google's Dragonfly is closer to launch than Google would like us to believe
U.S Government Accountability Office (GAO) reports U.S weapons can be easily hacked

Google Cloud’s Titan and Android Pie come together to secure users’ data on mobile devices

Sunith Shetty
15 Oct 2018
3 min read
Barring the Google+ security incident, Google has had an excellent track record of providing security services that protect different levels of users' data with ease. Android 9 now aims to give users more options to protect their data: to enhance user data security, Android will combine Android's Backup Service and Google Cloud's Titan technology to protect data backups while also maintaining the required privacy.

Completely backed-up user data is essential for a rich user experience

A lot of time and effort may be required to create an identity, add new data, and customize settings based on a user's preferences for an individual app. Whenever the user upgrades to a new device or re-installs an application, preserving the user data is a must for a smooth user experience. A huge amount of data is generated when using mobile apps, so proper backup techniques are needed; having only a small amount of data backed up can be frustrating for users, especially when they open an app on a new device.

Android Backup Service + Titan technology = secured data backups

With Android Pie, devices can take advantage of a new technique in which backed-up application data can only be decrypted using a key that is randomly generated at the client. That key is encrypted using the user's lock-screen PIN, pattern, or passcode, which isn't known to Google. The passcode-protected key is then encrypted to a Titan security chip on Google Cloud's datacenter floor.

The Titan chip is configured so that it will release the backup encryption key only when presented with a correct claim derived from the user's passcode. Because the Titan chip must authorize every access to the decryption key, it can permanently block access after too many incorrect attempts at guessing the user's passcode, which mitigates brute-force attacks. The number of permitted attempts is strictly set by custom Titan firmware and cannot be updated or changed without erasing the contents of the chip. This means that no one can access the user's backed-up data without knowing the passcode.

The Android team hired an external agency for a security audit

The Android Security & Privacy team hired the global cybersecurity and risk mitigation expert NCC Group to complete a security audit, in order to ensure this new technique prevents anyone (including Google) from accessing users' application data. The audit produced positive findings on Google's security design processes, code quality validations, and handling of known attack vectors, all of which were taken into account prior to launching the service. The engineers quickly corrected the issues that were discovered during the audit. To get complete details on how the service fared, you can check the detailed report of the NCC Group findings. These external reviews allow Google and Android to maintain the transparency and openness that lets users feel safe about their data, says the Android team.

For a complete list of details, you can refer to the official Google blog.

Read more
Google takes steps towards better security, introduces new API policies for 3rd parties and a Titan Security system for mobile devices
Facebook says only 29 million and not 50 million users were affected by last month's security breach
Facebook finds 'no evidence that hackers accessed third party Apps via user logins', from last week's security breach
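To make the key-wrapping idea more concrete, here is a conceptual TypeScript sketch of the scheme described above, not Google's actual implementation: a random backup key is wrapped with a key derived from the user's passcode, and in the real system the unwrap step is additionally gated by the Titan chip's firmware-enforced retry limit.

```typescript
// Conceptual sketch only, NOT Google's implementation: wrap a random backup key
// with a key derived from the user's lock-screen passcode.
import { randomBytes, scryptSync, createCipheriv, createDecipheriv } from "crypto";

// 1. The client generates a random key that actually encrypts the backup payload.
const backupKey = randomBytes(32);

// 2. A key-encryption key (KEK) is derived from the user's passcode, which never leaves the device.
const salt = randomBytes(16);
const passcode = "1234"; // placeholder lock-screen PIN
const kek = scryptSync(passcode, salt, 32);

// 3. The backup key is wrapped with AES-256-GCM; only the wrapped blob is uploaded.
const iv = randomBytes(12);
const cipher = createCipheriv("aes-256-gcm", kek, iv);
const wrappedKey = Buffer.concat([cipher.update(backupKey), cipher.final()]);
const tag = cipher.getAuthTag();

// 4. Recovery: re-derive the KEK from the passcode and unwrap. In the real system,
// this unwrap is gated by the Titan chip, which enforces the retry limit in firmware.
const decipher = createDecipheriv("aes-256-gcm", kek, iv);
decipher.setAuthTag(tag);
const recovered = Buffer.concat([decipher.update(wrappedKey), decipher.final()]);
console.log(recovered.equals(backupKey)); // true
```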