
Tech News

Justice Department’s indictment report claims Chinese hackers breached business and government networks

Savia Lobo
21 Dec 2018
3 min read
According to an indictment report from the U.S. Justice Department released on Thursday, Chinese hackers working on behalf of China's Ministry of State Security breached the networks of dozens of tech companies and government departments, largely in an effort to steal intellectual property. The report stated that the attacks were carried out by a group known as APT10, which various security companies have linked to the Chinese state.

Speaking to Wired, Benjamin Read, senior manager for cyberespionage analysis at FireEye, said, "MSPs are incredibly valuable targets. They are people that you pay to have privileged access to your network. It's a potential foothold into hundreds of organizations."

What organizations did the Chinese cybercriminal group target?

According to Reuters, the hackers successfully targeted Hewlett Packard Enterprise, IBM, and both companies' customers.

In response to the attack, IBM said that it "has been aware of the reported attacks and already has taken extensive counter-measures worldwide as part of our continuous efforts to protect the company and our clients against constantly evolving threats. We take responsible stewardship of client data very seriously, and have no evidence that sensitive IBM or client data has been compromised by this threat."

HPE also responded. The company noted that it had spun out a large managed-services business in a 2017 merger with Computer Sciences Corp that formed a new company, DXC Technology, and said in a statement: "The security of HPE customer data is our top priority. We are unable to comment on the specific details described in the indictment, but HPE's managed services provider business moved to DXC Technology in connection with HPE's divestiture of its Enterprise Services business in 2017."

The hackers are believed to have used a technique known as spearphishing. This is a highly targeted form of phishing in which attackers pose as a reputable and trustworthy party, often using lookalike domains and email addresses, in order to scam specific targets.
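One common spearphishing tell is a sender domain that closely imitates a legitimate one. As a minimal illustration (not any tool named in the indictment; the trusted list and threshold are hypothetical), a sketch that flags lookalike domains by edit distance:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def is_lookalike(domain: str, trusted: list[str], max_dist: int = 2) -> bool:
    """Flag a domain that is close to, but not exactly, a trusted domain."""
    return any(0 < edit_distance(domain, t) <= max_dist for t in trusted)

trusted = ["ibm.com", "hpe.com"]          # hypothetical allow-list
print(is_lookalike("ibrn.com", trusted))  # True: "rn" imitates "m", two edits away
print(is_lookalike("ibm.com", trusted))   # False: an exact match is not a lookalike
```

Real mail filters combine many more signals (SPF/DKIM, display-name mismatches, homoglyphs), but the distance check captures the basic idea.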
Dmitri Alperovitch, Chief Technology Officer at CrowdStrike, said, "Today's announcement of indictments against Ministry of State Security (MSS), whom we deem now to be the most active Chinese cyber threat actor, is another step in a campaign that has been waged to indicate to China that its blatant theft of IP is unacceptable and will not be tolerated." Alperovitch added that "while this action alone will not likely solve the issue and companies in the US, Canada, Europe, Australia, and Japan will continue to be targeted by MSS for industrial espionage, it is an important element in raising the cost and isolating them internationally."

The U.K. government also said, "The National Cyber Security Centre assesses with the highest level of probability that the group widely known as APT10 is responsible for this sustained cyber campaign focused on large-scale service providers. The group almost certainly continues to target a range of global companies, seeking to gain access to commercial secrets."

"China has long rebuffed complaints from other nations accusing it of cyber attacks and espionage but didn't immediately comment on Thursday's indictment," per TechCrunch.

Former Senior VP's take on the Marriott data breach; NYT reports suspects Chinese hacking ties
Chinese hackers use snail mails to send malware on board government PCs
Chinese company ZTE Corp to assist the Venezuelan government to monitor citizen behavior using 'Fatherland Card'

Google Cloud releases a beta version of SparkR job types in Cloud Dataproc

Natasha Mathur
21 Dec 2018
2 min read
Earlier this week, Google released a beta version of SparkR jobs on Cloud Dataproc, a cloud service that lets you run Apache Spark and Apache Hadoop in a cost-effective manner. The SparkR job type brings R support to GCP. SparkR is a package that delivers a lightweight front-end for using Apache Spark from R; it supports distributed machine learning using MLlib, and it can be used to process large Cloud Storage datasets and to perform computationally intensive work.

The new job type also allows developers to use "dplyr-like operations" (dplyr is a powerful R package that transforms and summarizes tabular data with rows and columns) on datasets stored in Cloud Storage.

The R programming language is very efficient when it comes to building data analysis tools and statistical apps. With cloud computing all the rage, new opportunities have opened up for developers working with R. Using GCP's Cloud Dataproc Jobs API, it becomes easier to submit SparkR jobs to a cluster without any need to open firewalls for accessing web-based IDEs or to SSH onto the master node. With the API, it is easy to automate the repeatable R statistics that users want to run on their datasets.

Additionally, GCP for R helps avoid the infrastructure barriers that limit understanding of data, such as having to sample datasets due to compute or data-size limits. GCP also allows you to build large-scale models that help analyze datasets of sizes that would previously require big investments in high-performance computing infrastructure.

For more information, check out the official Google Cloud blog post.
Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha) to effectively train and deploy TensorFlow machine learning models on GCP
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
Google Cloud's Titan and Android Pie come together to secure users' data on mobile devices

Unity and Baidu collaborate for simulating the development of autonomous vehicles

Amrata Joshi
21 Dec 2018
3 min read
This week, Unity Technologies, the real-time 3D development platform creator, announced its collaboration with Baidu Inc., China's leading Internet giant, to develop a real-time simulation product that creates virtual environments and allows developers to test autonomous vehicles in real-world situations.

This real-time simulation will be available to developers taking part in Baidu's Apollo platform, an open and reliable platform for the development, testing, and deployment of Level 3, 4, and 5 autonomous vehicles. It covers the different areas of the self-driving technology spectrum, right from perception and localization to 3D simulation and end-to-end training and testing of autonomous vehicles.

The collaboration between Baidu and Unity is expected to speed up the development of a simulation environment for testing autonomous driving software, and will enable developers to digitalize the entire development process. There are advantages to using simulations and virtual environments in development and testing. For instance, risky, dangerous, or implausible scenarios can be generated and tested in simulation where it would be impossible to do the same in the real world.

"The ability to accurately conduct autonomous testing in a simulated environment allows for millions of simulations to simultaneously occur, providing Apollo partners with a competitive advantage while helping to keep their business costs down," said Tim McDonough, general manager of Industrial, Unity Technologies.

Unity's real-time 3D platform helps reduce errors and risks while increasing the efficiency and speed of testing by offering simulations that replicate real-world scenarios. Apart from Baidu, Unity also works with the largest OEMs in the world, improving the way they design, build, service, and sell automobiles.
The company has experts from companies such as BMW, Toyota, General Motors, Volvo, and the Volkswagen Group. Unity also recently launched the SimViz Solution Template, a package that helps OEMs build simulation environments.

Jaewon Jung, Chief Architect of Baidu's Intelligent Driving Group, said, "By using a platform like Unity, our developers can focus on testing and research without worrying about non-functional environments or building something from scratch. The Unity-powered game engine simulation for Apollo has the ability to expedite autonomous vehicle validation and training with precise ground truth data in a more effective and safer way."

It would be interesting to see how this collaboration accelerates Baidu's development of autonomous driving software. To read more about this news, check out Business Wire's post.

Unity ML-Agents Toolkit v0.6 gets two updates: improved usability of Brains and workflow for Imitation Learning
Unity 2018.3 is here with improved Prefab workflows, Visual Effect graph and more
Unity introduces guiding Principles for ethical AI to promote responsible use of AI

Kong 1.0 is now generally available with gRPC support, updated Database abstraction object and more

Amrata Joshi
21 Dec 2018
4 min read
Yesterday, the team at Kong announced the general availability of Kong 1.0, a scalable, fast, open source microservice API gateway that manages hybrid and cloud-native architectures. Kong can be extended through plugins, including authentication, traffic control, observability, and more. The first stable version of Kong 1.0 was launched earlier this year in September at the Kong Summit. The Kong API creates a Certificate Authority which Kong nodes can use for establishing mutual TLS authentication with each other. It can balance traffic from mail servers and other TCP-based applications, from L7 to L4.

What's new in Kong 1.0?

gRPC

This release supports the gRPC protocol along with REST. gRPC is built on top of HTTP/2 and provides an option for Kong users looking to connect east-west traffic with low overhead and latency. This helps enable Kong users to open up more mesh deployments in hybrid environments.

New Migrations Framework in Kong 1.0

This version of Kong introduces a new Database Abstraction Object (DAO), a framework that allows migrations from one database schema to another with nearly zero downtime. The new DAO lets users upgrade their Kong cluster all at once, without the need for manual intervention to upgrade each node.

Plugin Development Kit (PDK)

The PDK, a set of Lua functions and variables, can be used by custom plugins to implement logic on Kong. Plugins built with the PDK will be compatible with Kong versions 1.0 and above. The PDK's interfaces are much easier to use than the bare-bones ngx_lua API, and the PDK allows users to isolate plugin operations such as logging or caching. It is semantically versioned, which helps maintain backward compatibility.

Service Mesh Support

Users can now easily deploy Kong as a standalone service mesh. A service mesh can help address the security challenges of microservices: it secures services by integrating multiple layers of security with Kong plugins.
It also features secure communication at every step of the request lifecycle.

Seamless Connections

This release connects services in the mesh to services across all environments, platforms, and vendors. Kong 1.0 can be used to bridge the gap between cloud-native design and traditional architecture patterns.

Robust plugin architecture

This release comes with a robust plugin architecture that offers users unparalleled flexibility. Kong plugins provide key functionality and support integrations with other cloud-native technologies, including Prometheus, Zipkin, and many others. Kong's plugins can now execute code in the new preread phase, which improves performance.

AWS Lambda and Azure FaaS

Kong 1.0 comes with improvements to interactions with AWS Lambda and Azure FaaS, including Lambda Proxy Integration. The Azure Functions plugin can be used to filter out headers disallowed by HTTP/2 when proxying HTTP/1.1 responses to HTTP/2 clients.

Deprecations in Kong 1.0

Core: The API entity and related concepts such as the /apis endpoint have been removed from this release; Routes and Services are used instead. The old DAO implementation and the old schema validation library have been removed.

New Admin API: Filtering now happens with URL path changes (/consumers/x/plugins) instead of querystring fields (/plugins?consumer_id=x). Error messages have been reworked in this release to be more consistent, precise, and informative. The PUT method has been reimplemented.

Plugins: The galileo plugin has been removed. Some internal modules that were used by plugin authors before the introduction of the Plugin Development Kit (PDK) in 0.14.0 have now been removed, including the kong.tools.ip, kong.tools.public, and kong.tools.responses modules.

Major bug fixes

SNIs (Server Name Indication) are now correctly paginated. With this release, null and default values are handled better. Datastax Enterprise 6.X doesn't throw errors anymore.
Several typos, style, and grammar fixes have been made, and the router no longer injects an extra / in certain cases.

Read more about this release in Kong's blog post.

Kong 1.0 launches: the only open source API platform specifically built for microservices, cloud, and serverless
Eclipse 4.10.0 released with major improvements to colors, fonts preference page and more
Windows Sandbox, an environment to safely test EXE files is coming to Windows 10 next year
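The Admin API filtering change described in the deprecations can be sketched as follows. The two URL shapes come from the release notes; the helper functions themselves are hypothetical, shown only to contrast the old and new conventions:

```python
def plugins_url_pre_1_0(base: str, consumer_id: str) -> str:
    """Pre-1.0 style: filter plugins with a querystring field."""
    return f"{base}/plugins?consumer_id={consumer_id}"

def plugins_url_1_0(base: str, consumer_id: str) -> str:
    """Kong 1.0 style: filtering moves into the URL path."""
    return f"{base}/consumers/{consumer_id}/plugins"

base = "http://localhost:8001"  # Kong's default Admin API address
print(plugins_url_pre_1_0(base, "x"))  # http://localhost:8001/plugins?consumer_id=x
print(plugins_url_1_0(base, "x"))      # http://localhost:8001/consumers/x/plugins
```

Clients that build Admin API requests against the old querystring form need updating for 1.0.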

Windows Server 2019 comes with security, storage and other changes

Prasad Ramesh
21 Dec 2018
5 min read
Today, Microsoft unveiled new features of Windows Server 2019. The new features are based on four themes: hybrid, security, application platform, and Hyper-Converged Infrastructure (HCI).

General changes

Windows Server 2019, being a Long-Term Servicing Channel (LTSC) release, includes Desktop Experience. During setup, there are two options to choose from: Server Core installation or Server with Desktop Experience installation. A new feature called System Insights brings local predictive analytics capabilities to Windows Server 2019. This feature is powered by machine learning and aims to help users reduce the operational expenses associated with managing issues in Windows Server deployments.

Hybrid cloud in Windows Server 2019

A feature called Server Core App Compatibility feature on demand (FOD) greatly improves app compatibility in the Windows Server Core installation option. It does so by including a subset of binaries and components from Windows Server with Desktop Experience, without adding the Windows Server Desktop Experience graphical environment itself. The purpose is to increase the functionality of Windows Server while keeping a small footprint. This feature is optional and is available as a separate ISO that can be added to a Windows Server Core installation.

New measures for security

The new security measures add a new protection program, along with changes to virtual machines, networking, and the web.

Windows Defender Advanced Threat Protection (ATP)

There is now a Windows Defender program called Advanced Threat Protection (ATP). ATP has deep platform sensors and response actions to expose memory and kernel-level attacks. ATP can respond by suppressing malicious files and terminating malicious processes. There is also a new set of host-intrusion prevention capabilities called Windows Defender ATP Exploit Guard.
The components of ATP Exploit Guard are designed to lock down and protect a machine against a wide variety of attacks and to block behaviors common in malware attacks.

Software Defined Networking (SDN)

SDN delivers many security features which increase customer confidence in running workloads, be it on-premises or as a cloud service provider. These enhancements are integrated into the comprehensive SDN platform first introduced in Windows Server 2016.

Improvements to shielded virtual machines

Users can now run shielded virtual machines on machines which are intermittently connected to the Host Guardian Service, leveraging the fallback HGS and offline mode features. There are troubleshooting improvements for shielded virtual machines, enabled by support for VMConnect Enhanced Session Mode and PowerShell Direct. Windows Server 2019 now supports Ubuntu, Red Hat Enterprise Linux, and SUSE Linux Enterprise Server inside shielded virtual machines.

Changes for a faster and safer web

Connections are coalesced to deliver uninterrupted and encrypted browsing. For automatic connection failure mitigation and ease of deployment, HTTP/2's server-side cipher suite negotiation is upgraded.

Storage

Three storage changes are made in Windows Server 2019.

Storage Migration Service

This is a new technology that simplifies migrating servers to a newer Windows Server version. It has a graphical tool that inventories data on servers and transfers the data and configuration to newer servers. Users can optionally move the identities of the old servers to the new ones so that apps and users don't have to make changes.
Storage Spaces Direct

There are new features in Storage Spaces Direct:

Deduplication and compression capabilities for ReFS volumes
Native support for persistent memory
Nested resiliency for two-node hyper-converged infrastructure at the edge
Two-server clusters that use a USB flash drive as a witness
Support for Windows Admin Center
Display of performance history
Scale up to 4 petabytes per cluster
Mirror-accelerated parity that is two times faster
Drive latency outlier detection
Increased fault tolerance by manually delimiting the allocation of volumes

Storage Replica

Storage Replica is now also available in the Windows Server 2019 Standard edition. A new feature called test failover allows mounting of destination storage to validate replication or backup data. Performance improvements have been made, and Windows Admin Center support has been added.

Failover clustering

New features in failover clustering include:

Addition of cluster sets and Azure-aware clusters
Cross-domain cluster migration
USB witness
Cluster infrastructure improvements
Cluster Aware Updating support for Storage Spaces Direct
File share witness enhancements
Cluster hardening
Failover Cluster no longer using NTLM authentication

Application platform changes in Windows Server 2019

Users can now run Windows- and Linux-based containers on the same container host by using the same docker daemon, and changes are continually being made to improve support for Kubernetes. A number of improvements have been made to containers, such as changes to identity, compatibility, reduced size, and higher performance. Virtual network encryption now allows virtual network traffic to be encrypted between virtual machines that communicate within subnets marked as Encryption Enabled. There are also improvements to network performance for virtual workloads, the time service, SDN gateways, a new deployment UI, and persistent memory support for Hyper-V VMs.

For more details, visit the Microsoft website.
OpenSSH is now a part of Windows Server 2019
Microsoft announces Windows DNS Server Heap Overflow Vulnerability, users dissatisfied with patch details
Microsoft fixes 62 security flaws on Patch Tuesday and re-releases Windows 10 version 1809 and Windows Server 2019

IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others

Natasha Mathur
21 Dec 2018
5 min read
IEEE Computer Society (IEEE-CS) released its annual tech future predictions earlier this week, unveiling the ten technology trends it considers most likely to be adopted in 2019. "The Computer Society's predictions, based on an in-depth analysis by a team of leading technology experts, identify top technologies that have substantial potential to disrupt the market in the year 2019," mentions Hironori Kasahara, IEEE Computer Society President. Let's have a look at their top 10 technology trends predicted to reach wide adoption in 2019.

Top ten trends for 2019

Deep learning accelerators

According to the IEEE Computer Society, 2019 will see wide-scale adoption of companies designing their own deep learning accelerators, such as GPUs, FPGAs, and TPUs, which can be used in data centers. The development of these accelerators would further allow machine learning to be used in different IoT devices and appliances.

Assisted transportation

Another trend predicted for 2019 is the adoption of assisted transportation, which is already paving the way for fully autonomous vehicles. Although fully autonomous vehicles have not entirely arrived, self-driving tech saw a booming year in 2018. For instance, AWS introduced DeepRacer, a self-driving race car; Tesla is building its own AI hardware for self-driving cars; and Alphabet's Waymo will be launching the world's first commercial self-driving cars in the upcoming months. Beyond self-driving, assisted transportation is also highly dependent on deep learning accelerators for video recognition.

The Internet of Bodies (IoB)

As per the IEEE Computer Society, consumers have become very comfortable with self-monitoring using external devices like fitness trackers and smart glasses. With digital pills now entering mainstream medicine, body-attached, implantable, and embedded IoB devices provide richer data that enable the development of unique applications.
However, IEEE mentions that this tech also brings with it concerns related to security, privacy, physical harm, and abuse.

Social credit algorithms

Facial recognition tech was in the spotlight in 2018. For instance, Microsoft President Brad Smith requested this month that governments regulate the evolution of facial recognition technology, and Google patented a new facial recognition system that uses your social network to identify you. According to the IEEE, social credit algorithms will now see a rise in adoption in 2019. Social credit algorithms make use of facial recognition and other advanced biometrics to identify a person and retrieve data about them from digital platforms, which then informs the approval or denial of access to consumer products and services.

Advanced (smart) materials and devices

The IEEE Computer Society predicts that in 2019, advanced materials and devices for sensors, actuators, and wireless communications will see widespread adoption. These materials, which include tunable glass, smart paper, and ingestible transmitters, will lead to the development of applications in healthcare, packaging, and other appliances. "These technologies will also advance pervasive, ubiquitous, and immersive computing, such as the recent announcement of a cellular phone with a foldable screen. The use of such technologies will have a large impact on the way we perceive IoT devices and will lead to new usage models," mentions the IEEE Computer Society.

Active security protection

From data breaches (Facebook, Google, Quora, Cathay Pacific, etc.) to cyber attacks, 2018 saw many security-related incidents. 2019 will now see a new generation of security mechanisms that use an active approach to fight against these security incidents. These would involve hooks that can be activated when new types of attacks are exposed, and machine-learning mechanisms that can help identify sophisticated attacks.
Virtual reality (VR) and augmented reality (AR)

Packt's 2018 Skill Up report highlighted what game developers feel about the VR world: a whopping 86% of respondents replied with 'Yes, VR is here to stay'. The IEEE Computer Society echoes that thought, believing that VR and AR technologies will see even greater wide-scale adoption and will prove to be very useful for education, engineering, and other fields in 2019. IEEE notes that now that advertisements for VR headsets appear during prime-time television programs, VR/AR will see wide-scale adoption in 2019.

Chatbots

2019 will also see an expansion in the development of chatbot applications. Chatbots are used quite frequently for basic customer service on social networking hubs, and they're also used in operating systems as intelligent virtual assistants. Chatbots will also find applications in interaction with cognitively impaired children for therapeutic support. "We have recently witnessed the use of chatbots as personal assistants capable of machine-to-machine communications as well. In fact, chatbots mimic humans so well that some countries are considering requiring chatbots to disclose that they are not human," mentions IEEE.

Automated voice spam (robocall) prevention

IEEE predicts that automated voice spam prevention technology will see widespread adoption in 2019. It will be able to block a spoofed caller ID and, in turn, enable "questionable calls," where the computer asks the caller questions to determine whether the caller is legitimate.

Technology for humanity (specifically machine learning)

IEEE predicts an increase in the adoption rate of tech for humanity. Advances in IoT and edge computing are the leading factors driving the adoption of this technology. Events such as fires and bridge collapses are further creating the urgency to adopt these monitoring technologies in forests and on smart roads.
"The technical community depends on the Computer Society as the source of technology IP, trends, and information. IEEE-CS predictions represent our commitment to keeping our community prepared for the technological landscape of the future,” says the IEEE Computer Society. For more information, check out the official IEEE Computer Society announcement. Key trends in software development in 2019: cloud native and the shrinking stack Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity Quantum computing, edge analytics, and meta learning: key trends in data science and big data in 2019
Apple ups its AI game; promotes John Giannandrea as SVP of machine learning

Sugandha Lahoti
21 Dec 2018
2 min read
John Giannandrea joined Apple in April 2018, after an eight-year stint at Google. Yesterday, Apple announced that he has been promoted to senior vice president of Machine Learning and Artificial Intelligence Strategy and has joined the company's executive team. He will report directly to Apple CEO Tim Cook.

"John hit the ground running at Apple and we are thrilled to have him as part of our executive team," said Tim Cook, Apple's CEO. "Machine learning and AI are important to Apple's future as they are fundamentally changing the way people interact with technology, and already helping our customers live better lives. We're fortunate to have John, a leader in the AI industry, driving our efforts in this critical area," he added.

John will continue to look after Apple's virtual assistant Siri, as well as Core ML and Create ML, the software that developers can use to incorporate artificial intelligence capabilities into their applications. At Google, John oversaw Google Search, along with machine intelligence and research.

Apple is facing serious competition from its rivals in incorporating artificial intelligence into its software. Siri, its virtual assistant, has been criticized for its shortcomings in comparison to AI offerings from companies like Microsoft, Amazon, and Google. Perhaps this is why Apple has made this move to reorganize its AI teams into a single business unit under John's leadership and also give him full charge of Siri.

https://twitter.com/fromedome/status/1075982956966088704
https://twitter.com/stevekovach/status/1075816852528480256

For additional information, you may visit Apple Newsroom.

Apple's security expert joins the American Civil Liberties Union (ACLU)
Apple app store antitrust case to be heard by U.S. Supreme Court today
Apple has quietly acquired privacy-minded AI startup Silk Labs, reports Information

FBI takes down some ‘DDoS for hire’ websites just before Christmas

Prasad Ramesh
21 Dec 2018
2 min read
This Thursday, a California federal judge granted warrants to the FBI to take down several websites providing DDoS attack services. The domains were seized by the FBI just before the Christmas holidays, a season in which hackers have carried out DDoS attacks in the past, mainly targeting gaming services like PlayStation Network, Xbox, Steam, EA Online, etc.

According to the document, these 15 'booter' websites were taken down:

anonsecurityteam.com
critical-boot.com
defianceprotocol.com
ragebooter.com
str3ssed.me
bullstresser.net
quantumstress.net
booter.ninja
downthem.org
netstress.org
torsecurityteam.org
vbooter.org
defcon.pro
request.rip
layer7-stresser.xyz

According to the filed affidavits, three men were charged with operating the websites: Matthew Gatrel, 30, and Juan Martinez, 25, from California; and David Bukoski, 23, from Alaska. The U.K.'s National Crime Agency, the Netherlands Police, and the U.S. Department of Justice, along with companies like Cloudflare, Flashpoint, and Google, made joint efforts for the takedown. The takedown will most likely soon be followed by arrests.

As per the affidavit, some of these sites were capable of attacks exceeding 40 gigabits per second (Gbit/s), enough to render some websites dead for a long time. Hackers have previously told the Telegraph that the rationale behind attacks on gaming websites in the Christmas season is about the holiday spirit: they say that Christmas is not about "children sitting in their rooms and playing games, it is about spending time with their families."

What is a DDoS attack?

DDoS attacks have long been a problem, dating back to the 1970s. An attacker infects and uses multiple machines to target a network service and flood it with packets of useless data so that legitimate users are denied service. The goal of these attacks is to temporarily make the target services unavailable to their users. This story was initially reported by TechCrunch.
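At its simplest, a flood like the ones these booter sites sold shows up as an abnormal request rate. As a toy illustration of the detection side (not any tool mentioned in the affidavits; the limit and window values are arbitrary), a sliding-window counter that flags a client exceeding a request-rate threshold:

```python
from collections import deque

class FloodDetector:
    """Flags a client that sends more than `limit` requests in `window` seconds."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.hits: dict[str, deque] = {}

    def request(self, client: str, now: float) -> bool:
        """Record a request at time `now`; return True if the client is flooding."""
        q = self.hits.setdefault(client, deque())
        q.append(now)
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit

detector = FloodDetector(limit=100, window=1.0)
# 150 requests from one client within the same second trips the detector.
flagged = [detector.request("10.0.0.1", now=0.001 * i) for i in range(150)]
print(flagged[99], flagged[149])  # False True
```

Production DDoS mitigation works at far larger scale and across distributed sources, but per-source rate accounting like this is the basic building block.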
Twitter memes are being used to hide malware
An SQLite “Magellan” RCE vulnerability exposes billions of apps, including all Chromium-based browsers
Hackers are our society’s immune system – Keren Elazari on the future of Cybersecurity

Slack has terminated the accounts of some Iranian users, citing U.S. sanctions as the reason

Richard Gall
20 Dec 2018
4 min read
Slack has become a mainstay of many industries - when it goes down, you can be sure you'll know about it. However, for a number of users, Slack access appears to have been revoked - and most of these users have an Iranian background.

Mahdi Saleh, a PhD student at the Technical University of Munich, explained on Twitter how Slack had terminated his account "in order to comply with export control and economic sanctions laws and regulations promulgated by the U.S. Department of Commerce and the U.S. Department of Treasury."

https://twitter.com/mahdimax2010/status/1075524107847000064

Slack responded quickly on Twitter, explaining that "our systems may have detected an account on our platform with an IP address originating from a designated embargoed country." The company then offered to investigate the issue in detail for Saleh. Saleh said that following the exchange he got in touch with Slack but has not, at the time of writing, heard back from the company.

Other Slack users have had their accounts terminated

Saleh says he does not believe his case is unique: "Apparently I am not the only person outside Iran that this happened to," he said. "A lot researchers [sic] and emigrants got the same email from Slack." A quick Twitter search indicates that this is the case, with a number of users sharing the same message from Slack as the one received by Saleh.

"I’m a PhD student in Canada with no teammates from Iran!" said Twitter user @a_h_a. "Is Slack shutting down accounts of those ethnically associated with Iran?! And what’s their source of info on my ethnicity?"

https://twitter.com/a_h_a/status/1075510422617219077

The same user also called into question whether the reasons given by Slack are accurate. "I’m in Canada. No ties to Iran. No teammates in Iran!" he said. "I wonder how my account was associated with my ethnicity and how/where they digged [sic] this info from."

Amir Omidi, a software developer at ipinfo.io, said that there was "no way to appeal this decision. No way to prove that I'm not living in Iran and not working with Iranians on slack. Nope. Just hello we're banning your account."

https://twitter.com/aaomidi/status/1075621119028314112

These aren't the only cases - there is a huge range of other examples of Iranians based in the U.S. and Canada having their accounts terminated.

https://twitter.com/nv_rahimi/status/1075695124561125377
https://twitter.com/bamaro_ir/status/1075667601878061056

"Filter coffee, not people," said one user.

Slack responds

It's hard to say exactly what's going on. We approached Slack for their perspective; the company provided us with a statement very similar to the one sent to users whose accounts were terminated. It reads:

“Slack complies with the U.S. regulations related to embargoed countries and regions. As such, we prohibit unauthorized Slack use in Cuba, Iran, North Korea, Syria and the Crimea region of Ukraine. For more information, please see the US Department of Commerce Sanctioned Destinations, The U.S. Department of Treasury website, and the Bureau of Industry and Security website.

“Our systems may have detected an account and/or a workspace owner on our platform with an IP address originating from a designated embargoed country. If our systems indicate a workspace primary owner has an IP address originating from a designated embargoed country, the entire workspace will be deactivated. If someone thinks any actions we took were done in error, we will review further.”

What does this tell us about how Slack handles user data?

With no clear response from Slack, it isn't exactly clear how this happened. You could take Slack at their word, but given the information shared by users on Twitter, there does appear to be a piece missing in this puzzle. However, we probably can say that Slack does have an extensive record that allows it to link accounts to specific countries - whether that's via IP address or something else. As one Twitter user wrote, the story suggests that Slack has "more data than some customs and border agencies."

https://twitter.com/rakyll/status/1075691304896606208

This article was updated 12.10.2018 10.25am EST to include Slack's response.
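Slack's statement suggests an automated check of account IP addresses against a list of embargoed regions. A minimal sketch of how such a check could work follows - this is entirely hypothetical: the lookup table, function names, and policy encoding are stand-ins for illustration, not Slack's implementation:

```python
# ISO country codes for the embargoed destinations Slack names
# (Crimea is a region, not a country code, so real systems need extra handling)
EMBARGOED = {"CU", "IR", "KP", "SY"}

def country_of(ip, geo_db):
    # geo_db is a stand-in for a GeoIP lookup service
    return geo_db.get(ip, "??")

def should_deactivate(workspace, geo_db):
    """Deactivate the whole workspace if its primary owner's IP maps to an
    embargoed country - the policy described in Slack's statement."""
    return country_of(workspace["owner_ip"], geo_db) in EMBARGOED

# Documentation-range IP addresses with made-up country mappings
geo_db = {"203.0.113.7": "IR", "198.51.100.4": "CA"}
print(should_deactivate({"owner_ip": "203.0.113.7"}, geo_db))  # True
print(should_deactivate({"owner_ip": "198.51.100.4"}, geo_db))  # False
```

As the user reports above illustrate, IP-based inference like this is error-prone: stale geolocation data, VPNs, shared workspaces, or past travel can map an account to a country its owner has no current ties to.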

Uber to restart its autonomous vehicle testing, nine months after the fatal Arizona accident

Natasha Mathur
20 Dec 2018
3 min read
It was back in March this year that a self-driving car operated by Uber killed a pedestrian, 49-year-old Elaine Herzberg, in Tempe, Arizona. Uber, which had to halt the on-road testing of its autonomous vehicles after the incident, was granted permission to restart that testing yesterday. The authorization letter from the Pennsylvania Department of Transportation (PennDOT) confirmed that Uber will resume on-road testing of self-driving cars in Pittsburgh.

As per the details of the accident’s investigation, Rafaela Vasquez, the backup driver, had looked down at her phone 204 times during the course of a 43-minute test drive. After the accident, Uber had to halt all of its autonomous vehicle testing operations in Pittsburgh, Toronto, San Francisco, and Phoenix.

Additionally, a shocking revelation was made last week by Robbie Miller, a manager in Uber's testing-operations group, who said he had tried to warn the company’s top executives about the danger a few days before the fatal Arizona accident. According to Miller, he sent an email to Uber’s top executives warning them about the dangers related to the software powering Uber’s prototype “robo-taxis”, and about the human backup drivers in the vehicles, who hadn’t been properly trained to do their jobs effectively.

Uber also recently released its safety report, in which the company said it is committed to “anticipating and managing risks” that come with on-road testing of autonomous vehicles, but cannot guarantee to eliminate all of the risks involved.

“We are deeply regretful for the crash in Tempe, Arizona, this March. In the hours following, we grounded our self-driving fleets in every city they were operating. In the months since, we have undertaken a top-to-bottom review of ATG’s safety approaches, system development, and culture. We have taken a measured, phased approach to return to on-road testing, starting first with manual driving in Pittsburgh”, said Uber.

Although Uber has not released any details on when exactly it will restart its AVs' road testing, it says it will only go back to on-road testing once it has implemented the improved processes. Going forward, Uber will make sure to always have two employees in the front seats of its self-driving cars. There will also be an automatic braking system enabled, to strictly monitor the safety of the employees within these self-driving cars.

Uber’s new family of AI algorithms sets records on Pitfall and solves the entire game of Montezuma’s Revenge
Uber announces the 2019 Uber AI Residency
Uber posted a billion dollar loss this quarter. Can Uber Eats revitalize the Uber growth story
Google shares initiatives towards enforcing its AI principles; employs a formal review structure for new projects

Bhagyashree R
20 Dec 2018
3 min read
Earlier this year, Sundar Pichai shared seven AI principles that Google aims to follow in its work, and Google also shared some best practices for building responsible AI. Yesterday, the company described the additional initiatives and processes it has introduced to live up to those AI principles. These include educating people about ethics in technology and introducing a formal review structure for new projects, products, and deals.

Educating Googlers on ethical AI

Making Googlers aware of ethical issues: Additional learning material has been added to the Ethics in Technology Practice course, which teaches technical and non-technical Googlers how to address the ethical issues that arise at work. In the future, Google plans to make this course accessible to everyone across the company.

Introducing an AI Ethics Speaker Series: This series features external experts across different countries, regions, and professional disciplines. So far, eight sessions have been conducted with 11 speakers, covering topics from bias in natural language processing (NLP) to the use of AI in criminal justice.

AI fairness: A new module on fairness has been added to Google’s free Machine Learning Crash Course. The course is available in 11 languages and is being used by more than 21,000 Google employees. The fairness module explores the different types of human bias that can creep into training data and provides strategies to identify and evaluate their effects.

Review structure for new projects, products, and deals

Google has employed a formal review structure to check the scale, severity, and likelihood of best- and worst-case scenarios for new projects, products, and deals. This review structure consists of three core groups:

Innovation team: This team consists of user researchers, social scientists, ethicists, human rights specialists, policy and privacy advisors, and legal experts. They are responsible for day-to-day operations and initial assessments.

Senior experts: This group consists of senior experts from a range of disciplines across Alphabet Inc. They provide technological, functional, and application expertise.

Council of senior executives: This group handles decisions that affect multiple products and technologies.

More than 100 reviews have been completed under this formal review structure so far. In the future, Google plans to create an external advisory group comprising experts from a variety of disciplines. To read more about Google’s initiatives towards ethical AI, check out the official announcement.

Google won’t sell its facial recognition technology until questions around tech and policy are sorted
Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha) to effectively train and deploy TensorFlow machine learning models on GCP
Google kills another product: Fusion tables

Eclipse 4.10.0 released with major improvements to colors, fonts preference page and more

Amrata Joshi
20 Dec 2018
3 min read
Yesterday, the Eclipse team released Eclipse SDK 4.10.0, which is part of Eclipse IDE 2018-12. This release features improved views, options, dialogs, the Java editor, and more.

https://twitter.com/EclipseJavaIDE/status/1075422538484846597

Improvements in Eclipse 4.10.0

Views and dialogs

The Quick Switch Editor (Ctrl+E) dialog for editor selection has been improved and now shows the path of the resource along with its filename. In Eclipse 4.10.0, the workspace selection dialog shows completion proposals, making it easier to pick a workspace with the keyboard. It is now possible to convert a plug-in project to a modular project by selecting the Configure > Create module-info.java context menu, which creates the module-info.java file for the project.

Colors and Fonts preference page

The Colors and Fonts preference page has been updated and now supports searching by font, font height, and font style, allowing users to quickly see where a font is used or where a specific style or size is used. This release also comes with a new option on the Plug-in Development preference page that disables the API analysis builder.

Chevron button for hidden tabs

The chevron button now shows the number of hidden tabs. It no longer has transparency artifacts, which makes it more readable, especially in the dark theme.

Added support for custom URL schemes in Eclipse 4.10.0

This release can handle custom URL schemes such as https, ssh, and git. When a user clicks a link with a specific custom URL scheme, Eclipse first starts and then handles the clicked link. Users can control which URL schemes should be handled by the current Eclipse installation via the General > Link Handlers preference page.

ContentAssistant class

The ContentAssistant class now allows consumers to configure whether completion proposal trigger characters are honored or ignored. If ContentAssistant.enableCompletionProposalTriggerChars(false) is used, completion proposal trigger characters are ignored and users have to press the Enter key to trigger insertion. If ContentAssistant.enableCompletionProposalTriggerChars(true) is used, completion proposal trigger characters can be used along with the Enter key to insert the proposal. If the enableCompletionProposalTriggerChars(boolean) method is not called, the default behaviour is equivalent to calling enableCompletionProposalTriggerChars(true), so extra trigger characters are honored.

Java editor

Eclipse 4.10.0 comes with a quick fix, Change project compliance and JRE to 11. There is also a quick assist that allows adding the var type to lambda parameters; this quick assist is only available if the project compliance is Java 11 or above. An option to set compiler compliance to 11 on a Java project is now available. With this release, the Java editor shows the number of implementations and references for a Java element as decorative text (Code Minings) above the element.

Read more about this release on the Eclipse blog.

Eclipse IDE’s Photon release will support Rust
Will Ethereum eclipse Bitcoin?
Apache Maven and m2eclipse

Ionic v4 RC released with improved performance, UI Library distribution and more

Amrata Joshi
20 Dec 2018
3 min read
Yesterday, the team at Ionic released the final release candidate for Ionic v4; the final version of Ionic v4 is expected in early 2019. Ionic is an app platform for web developers for building mobile, web, and desktop apps with one shared code base and open web standards. The Ionic v4 RC has been nicknamed Ionic Neutronium, following the element-themed names of the earlier pre-releases (Ionic 4.1 Hydrogen, Ionic 4.2 Helium, 4.3 Lithium, and so on). With the final release candidate, the API has been stabilized, with further fixes to come in patch and minor releases.

Improvements in Ionic v4 RC

Mobile performance

Ionic v4 comes with improved app startup times, especially on mobile devices, and smaller file sizes that work well with apps on iOS, Android, Electron, and PWAs.

Source: Ionic

Ivy Renderer for Angular users

Ionic v4 will ship with support for Angular's Ivy Renderer - Angular’s fastest and smallest renderer so far - which will prove useful for Angular and Ionic developers alike. Even a simple Ivy “Hello World” app compiles down to a size of 2.7 KB.

UI Library Distribution

Ionic Neutronium improves the UI library by using standard Web APIs such as custom elements that are capable of lazy-loading themselves on demand. Both Angular and Ionic can now iterate and improve independently, and developers can take advantage of these improvements with fewer restrictions.

Support for Angular tooling

The Angular CLI and Router have both become production-ready with this release and are capable of the native-style navigation required by Ionic apps. This release also adds support for Angular schematics, so Angular developers can run ng add @ionic/angular to add Ionic directly to their app.

Angular, React, and Vue

Ionic v4 RC allows users to continue using Ionic in projects based on React or Vue. The goal behind this release was to decouple Ionic from any specific version of a single framework’s runtime and component model. Last month, the Ionic community and Modus Create released the alpha version of @ionic/vue; @ionic/react is in progress and is expected to be released soon.

Major bug fixes

The issues with scrollable options have been fixed. The Cordova browser error has been fixed. Fixes have been made to sibling router-outlets and router-outlet memory. The progress-bar component has been improved and looks better now. Issues with virtual-scroll have been fixed.

Many users are happy with this release and are awaiting the final version in 2019, which might prove to be the perfect new year celebration for developers as well as the Ionic team. Read more about this release on Ionic’s blog post.

JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript
Ionic framework announces Ionic 4 Beta
Ionic Components
Windows Sandbox, an environment to safely test EXE files is coming to Windows 10 next year

Prasad Ramesh
20 Dec 2018
2 min read
Microsoft will be offering a new tool called Windows Sandbox with a Windows 10 update next year. Revealed this Tuesday, it provides an environment to safely test EXE applications before running them on your computer.

Windows Sandbox features

Windows Sandbox is an isolated desktop environment where users can run untrusted software without any risk of it affecting their computer. Any application installed in Windows Sandbox is contained in the sandbox and cannot affect the host. All software, along with its files and state, is permanently deleted when Windows Sandbox is closed; every run starts fresh, like a clean installation of Windows. You need Windows 10 Pro or Windows 10 Enterprise to use it, and it will ship with an update - no separate download needed. Windows Sandbox uses hardware-based virtualization for kernel isolation, built on Microsoft’s hypervisor: a separate kernel isolates it from the host machine, and it has an integrated kernel scheduler and a virtual GPU.

Source: Microsoft website

Requirements

To use this new Hyper-V-based feature, you’ll need: AMD64 architecture, virtualization capabilities enabled in the BIOS, a minimum of 4 GB RAM (8 GB recommended), 1 GB of free disk space (SSD recommended), and a dual-core CPU (4 cores with hyperthreading recommended).

What are people saying?

The general sentiment towards this release is positive.

https://twitter.com/AnonTechOps/status/1075509695778041857

However, a comment on Hacker News suggests that it might not be that useful for its intended purpose: “Ironically, even though the recommended use for this in the opening paragraph is to combat malware, I think that will be the one thing this feature is no good at. Doesn’t even moderately sophisticated malware these days try to detect if it’s in a sandbox environment? A fresh-out-of-the-box Windows install must be a giant red flag for that.”

Meanwhile, if you’re on Windows 7 or Windows 8, you can try Sandboxie. For more technical details under the hood of Sandbox, visit the Microsoft website.

Oracle releases VirtualBox 6.0.0 with improved graphics, user interface and more
Chaos engineering platform Gremlin announces $18 million series B funding and new feature for “full-stack resiliency”
Are containers the end of virtual machines?

Stanford researchers introduce DeepSolar, a deep learning framework that mapped every solar panel in the US

Bhagyashree R
20 Dec 2018
3 min read
Yesterday, researchers from Stanford University introduced DeepSolar, a deep learning framework that analyzes satellite images to identify the GPS location and size of solar panels. Using this framework, they have built a comprehensive database containing the GPS locations and sizes of solar installations in the US. The system identified 1.47 million individual solar installations across the United States, ranging from small rooftop configurations to solar farms and utility-scale systems.

The DeepSolar database is publicly available to help researchers extract further insights into solar adoption. It will also help policymakers better understand the correlation between solar deployment and socioeconomic factors such as household income, population density, and education level.

How DeepSolar works

DeepSolar uses transfer learning to train a CNN classifier on 366,467 images, sampled from over 50 cities and towns across the US, with merely image-level labels indicating the presence or absence of panels. One of the researchers, Ram Rajagopal, explained the model to Gizmodo: “The algorithm breaks satellite images into tiles. Each tile is processed by a deep neural net to produce a classification for each pixel in a tile. These classifications are combined together to detect if a system—or part of—is present in the tile.” The deep neural net thus identifies which tiles contain a solar panel. Once training is complete, the network produces an activation map, also known as a heat map, that outlines the panels; these outlines can be used to obtain the size of each solar panel system.

Rajagopal further explained how this approach improves accuracy: “A rooftop PV system typically corresponds to multiple pixels. Thus even if each pixel classification is not perfect, when combined you get a dramatically improved classification. We give higher weights to false negatives to prevent them.”

What are some of the observations the researchers made?

To measure classification performance, the researchers used two metrics: precision, the rate of correct decisions among all positive decisions, and recall, the rate of correct decisions among all positive samples. DeepSolar achieved a precision of 93.1% with a recall of 88.5% in residential areas, and a precision of 93.7% with a recall of 90.5% in non-residential areas. To measure size-estimation performance, they calculated the mean relative error (MRE), which was 3.0% for residential areas and 2.1% for non-residential areas.

Future work

Currently, the DeepSolar database only covers the contiguous US. The researchers plan to expand its coverage to include all of North America - including remote areas with utility-scale solar and non-contiguous US states - and ultimately other countries and regions of the world. DeepSolar also currently estimates only the horizontal projection areas of solar panels from satellite imagery; in the future, it may infer high-resolution roof orientation and tilt information from street-view images, giving a more accurate estimation of solar system size and solar power generation capacity.

To learn more, check out the research paper published by Ram Rajagopal et al: DeepSolar: A Machine Learning Framework to Efficiently Construct a Solar Deployment Database in the United States.

Introducing remove.bg, a deep learning based tool that automatically removes the background of any person based image within 5 seconds
NeurIPS 2018: How machine learning experts can work with policymakers to make good tech decisions [Invited Talk]
NVIDIA makes its new “brain for autonomous AI machines”, Jetson AGX Xavier Module, available for purchase
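The precision, recall, and mean relative error figures reported above are standard metrics and easy to compute. Here is a minimal Python sketch; the counts and size values are toy numbers chosen only to reproduce the rounded residential-area figures, not DeepSolar's actual data:

```python
def precision(tp, fp):
    # Rate of correct decisions among all positive decisions
    return tp / (tp + fp)

def recall(tp, fn):
    # Rate of correct decisions among all positive samples
    return tp / (tp + fn)

def mean_relative_error(estimates, truths):
    # Average of |estimate - truth| / truth over all panel systems
    return sum(abs(e - t) / t for e, t in zip(estimates, truths)) / len(truths)

# Toy example: out of 1000 real panel systems the model finds 885
# (true positives), misses 115 (false negatives), and raises 66
# false alarms (false positives)
tp, fp, fn = 885, 66, 115
print(f"precision: {precision(tp, fp):.1%}")  # 93.1%
print(f"recall:    {recall(tp, fn):.1%}")     # 88.5%

# Toy size estimates (square meters) vs. ground truth
print(f"MRE: {mean_relative_error([102.0, 48.5, 210.0], [100.0, 50.0, 200.0]):.1%}")
```

The trade-off Rajagopal describes - weighting false negatives more heavily - shifts the balance toward higher recall at some cost to precision.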