
Tech News

3711 Articles

Amazon consumer business migrated to Redshift with plans to move 88% of its Oracle DBs to Aurora and DynamoDB by year end

Natasha Mathur
12 Nov 2018
3 min read
Amazon is getting close to moving away from Oracle. Andy Jassy, CEO of Amazon Web Services, tweeted last week about turning off the company's Oracle data warehouse and moving to Redshift. Jassy's tweet appears to be a response to the constant taunts and punch lines from Oracle's CTO, Larry Ellison.

https://twitter.com/ajassy/status/1060979175098437632

News of Amazon's shift away from Oracle first surfaced in January this year. It was followed by a CNBC report in August describing Amazon's plans to move off Oracle by 2020. According to the report, Amazon had already begun migrating most of its internal infrastructure to Amazon Web Services.

The move away from Oracle, however, has been harder than expected. Last month Amazon suffered an outage in one of its biggest warehouses on Prime Day (one of Amazon's biggest sales days of the year), as reported by CNBC. The major cause of the outage was Amazon's migration from Oracle's database to its own technology, Aurora PostgreSQL.

Amazon and Oracle have also traded barbs for years over the performance of their database software and cloud tools. Larry Ellison, for instance, slammed Amazon, saying, “Let me tell you an interesting fact: Amazon does not use [Amazon Web Services] to run their business. Amazon runs their entire business on top of Oracle, on top of the Oracle database. They have been unable to migrate to AWS because it’s not good enough.” Ellison took another swipe at Amazon during last year's Oracle OpenWorld conference, claiming that “Oracle’s services are just plain better than AWS” and that Amazon is “one of the biggest Oracle users on Planet Earth”.

“Amazon's Oracle data warehouse was one of the largest (if not THE largest) in the world. RIP. We have moved on to newer, faster, more reliable, more agile, more versatile technology at more lower cost and higher scale. #AWS Redshift FTW,” tweeted Werner Vogels, CTO, Amazon.

Public reaction to Amazon's decision to migrate away from Oracle has been largely positive:

https://twitter.com/eonnen/status/1061082419057442816
https://twitter.com/adamuaa/status/1061094314909057024
https://twitter.com/nayar_amit/status/1061154161125773312

Oracle makes its Blockchain cloud service generally available
Integrate applications with AWS services: Amazon DynamoDB & Amazon Kinesis [Tutorial]
AWS Elastic Load Balancing: support added for Redirects and Fixed Responses in Application Load Balancer


Free Software Foundation updates their licensing materials, adds Commons Clause and Fraunhofer FDK AAC license

Sugandha Lahoti
12 Nov 2018
3 min read
Last week, the Free Software Foundation (FSF) updated its licensing materials, adding two licenses to its list: the Commons Clause and the Fraunhofer FDK AAC license. It has also updated its article on license compatibility and relicensing, and added a new entry to the frequently asked questions about the GNU licenses.

Commons Clause

The Commons Clause has been added to the FSF's list of non-free licenses. The clause is added to an existing free license to prevent using the work commercially, rendering the work nonfree. Because the clause makes a work non-free, the FSF recommends forking software that uses it: if a previously existing project that was under a free license adds the Commons Clause, users should work to fork that program and continue using it under the free license. If it isn't worth forking, users should simply avoid the package.

The move sparked controversy. Critics argue that the Commons Clause piggybacks on top of existing free software licenses and could mislead users into thinking that software using it is free software when it is, in fact, proprietary by the FSF's definitions. Others, however, find the combination of a free software license plus the Commons Clause compelling. A Hacker News user pointed out, “I'm willing to grant to the user every right offered by free software licenses with the exception of rights to commercial use. If that means my software has to be labeled as proprietary by the FSF, so be it, but at the same time I'd prefer not to mislead users into thinking my software is being offered under a vanilla free software license.” Another said, “I don't know there is any controversy as such. The FSF is doing its job and reminding everyone that freedom includes the freedom to make money. If your software is licensed under something that includes the Commons Clause then it isn't free software, because users are not free to do what they want with it.”

The Fraunhofer FDK AAC license

The FSF has also added the Fraunhofer FDK AAC license to its list. This is a free license, incompatible with any version of the GNU General Public License (GNU GPL), but it comes with a word of caution: while Fraunhofer provides a copyright license, it explicitly declines to grant any patent license and in fact directs users to contact them to obtain one. Users should therefore determine carefully whether they feel comfortable using works under this license.

Other changes

The FSF has added a new section to its article on License Compatibility and Relicensing that addresses combinations of code. The section, announced in September, helps users simplify the picture when dealing with a project that combines code under multiple compatible licenses. A new FAQ entry explains what the GNU GPL says about translating code into another programming language.

Read more about the news on the FSF Blog.

Is the ‘commons clause’ a threat to open source?
GitHub now supports the GNU General Public License (GPL) Cooperation Commitment as a way of promoting effective software regulation
Lerna relicenses to ban major tech giants like Amazon, Microsoft, Palantir from using its software as a protest against ICE


Amazon addresses employee dissent regarding the company’s law enforcement policies at an all-staff meeting, in a first

Savia Lobo
09 Nov 2018
3 min read
Yesterday, at an Amazon all-staff meeting, the company addressed its relationship with law enforcement agencies. The discussion came in response to concerns employees raised in June about the company's frequent, and often successful, attempts to provide cloud infrastructure and facial recognition software to government authorities, including Immigration and Customs Enforcement (ICE). This was the very first Amazon all-staff meeting, and it was live-streamed globally.

When asked what is being done in response to the concerns voiced by both Amazon employees and civil rights groups, Andy Jassy, CEO of Amazon Web Services, said, “There’s a lot of value being enjoyed from Amazon Rekognition. Now now, of course, with any kind of technology, you have to make sure that it’s being used responsibly, and that’s true with new and existing technology. Just think about all the evil that could be done with computers or servers and has been done, and you think about what a different place our world would be if we didn’t allow people to have computers.” According to BuzzFeed, questions for the meeting were pre-screened, with no opportunity for unvetted questions from the floor.

Last year, Amazon faced controversy over some uses of its AI-powered facial recognition product, Rekognition. Its uses range from monitoring faces in group photos, crowded events, and public places such as airports, to running those images against mugshot databases for matches. (A brief sketch of what such a search looks like through the Rekognition API appears at the end of this piece.)

In June, hundreds of Amazon employees signed 'We Won’t Build It', an open letter to CEO Jeff Bezos asking Amazon to stop selling Rekognition to the police, citing the “historic militarization of police, renewed targeting of Black activists, and the growth of a federal deportation force currently engaged in human rights abuses”. The letter states, “Our company should not be in the surveillance business; we should not be in the policing business; we should not be in the business of supporting those who monitor and oppress marginalized populations.” The workers also pointed out Amazon’s commercial relationship with the data firm Palantir, which does business with U.S. Immigration and Customs Enforcement. According to public documents obtained by the Project on Government Oversight, “Amazon also pitched its facial recognition technology directly to the ICE, a few months after the federal immigration agency started enforcing President Trump’s controversial zero-tolerance family-separation border policy.”

The American Civil Liberties Union (ACLU) has also raised concerns about Rekognition's potential misuse for racial profiling. In a test the organization ran, the software incorrectly matched 28 members of Congress, identifying them as other people who had been arrested for a crime, and the false matches disproportionately involved people of color, including six members of the Congressional Black Caucus.

Jeff Bezos, at a Wired conference last month, stated, “If big tech companies are going to turn their back on the U.S. Department of Defense, this country is going to be in trouble.”

To know more about this news in detail, head over to the complete Q&A of the meeting on BuzzFeed.

Apple and Amazon take punitive action against Bloomberg’s ‘misinformed’ hacking story
‘We are not going to withdraw from the future’ says Microsoft’s Brad Smith on the ongoing JEDI bid, Amazon concurs
Amazon tried to sell its facial recognition technology to ICE in June, emails reveal
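To make the kind of one-to-many face search described above concrete, here is a minimal sketch of how such a query looks through the public Rekognition API using boto3. It assumes AWS credentials are configured and that a face collection has already been indexed; the collection name and file path are placeholders, not anything Amazon or the article specifies.

    import boto3

    COLLECTION_ID = "example-face-collection"  # hypothetical, pre-indexed collection

    client = boto3.client("rekognition")

    with open("query_face.jpg", "rb") as f:
        image_bytes = f.read()

    # One-to-many search: match the largest face in the photo against
    # every face previously indexed into the collection.
    response = client.search_faces_by_image(
        CollectionId=COLLECTION_ID,
        Image={"Bytes": image_bytes},
        FaceMatchThreshold=80,  # report only matches of at least 80% similarity
        MaxFaces=5,
    )

    for match in response["FaceMatches"]:
        face = match["Face"]
        print(face.get("ExternalImageId", face["FaceId"]), match["Similarity"])

Part of the dispute over the ACLU test concerned exactly this similarity-threshold setting, which controls how readily the service reports a "match".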


A Microsoft Windows bug deactivates Windows 10 Pro licenses and downgrades to Windows 10 Home, users report

Savia Lobo
09 Nov 2018
2 min read
Yesterday, Microsoft users reported that a bug affecting the Microsoft Windows activation service was causing Windows 10 Pro licenses to be downgraded to Windows 10 Home, says BleepingComputer. The bug flashes a message on the user's screen stating that their license is not activated and prompts them to troubleshoot the problem.

[Screenshots: "Windows 10 Pro not activated" and "Troubleshooting Completed" (source: BleepingComputer)]

According to a Reddit post, "Microsoft has just released an Emerging issue announcement about current activation issue related to Pro edition recently. This happens in Japan, Korea, American, and many other countries. I am very sorry to inform you that there is a temporary issue with Microsoft's activation server at the moment and some customers might experience this issue where Windows is displayed as not activated. Our engineers are working tirelessly to resolve this issue and it is expected to be corrected within one to two business days."

Jeff Jones, Sr. Director at Microsoft, told BleepingComputer, “We’re working to restore product activations for the limited number of affected Windows 10 Pro customers.” He added, “A limited number of customers experienced an activation issue that our engineers have now addressed. Affected customers will see a resolution over the next 24 hours as the solution is applied automatically. In the meantime, they can continue to use Windows 10 Pro as usual.”

To know more about this news, head over to BleepingComputer.

Microsoft announces .NET Standard 2.1
Microsoft releases ProcDump for Linux, a Linux version of the ProcDump Sysinternals tool
Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report


OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners

Prasad Ramesh
09 Nov 2018
3 min read
OpenAI released Spinning Up yesterday, an educational resource for anyone who wants to become a skilled practitioner of deep reinforcement learning. Spinning Up includes example implementations of reinforcement learning algorithms, documentation, and tutorials.

The inspiration to build Spinning Up comes from the OpenAI Scholars and Fellows initiatives, where OpenAI observed that it is possible for people with little-to-no machine learning experience to rapidly become practitioners with the right guidance and resources. Spinning Up in Deep RL is also integrated into the curriculum for OpenAI's 2019 cohorts of Scholars and Fellows.

A quick overview of the Spinning Up course content:

A short introduction to reinforcement learning: what it is, the terminology used, the different types of algorithms, and the basic theory needed to develop an understanding.
An essay laying out what it takes to grow into a reinforcement learning research role, covering background, practice, and developing a project.
A list of important research papers organized by topic.
A well-documented code repository of short, standalone implementations of various algorithms, including Vanilla Policy Gradient (VPG), Trust Region Policy Optimization (TRPO), Proximal Policy Optimization (PPO), Deep Deterministic Policy Gradient (DDPG), Twin Delayed DDPG (TD3), and Soft Actor-Critic (SAC).
And finally, a few exercises for applying what you have learned.

Support plan for Spinning Up

Fast-paced support period: for the first three weeks after release, OpenAI will work quickly on bug fixes, installation issues, and errors in the docs, streamlining the user experience so that it is as easy as possible to self-study with Spinning Up.
A major review in April 2019: around April next year, OpenAI will review the state of the package based on feedback received from the community, and will announce any plans for future modification after that.
Public release of internal development: as changes are made to Spinning Up in Deep RL through work with the OpenAI Scholars and Fellows, they will be pushed to the public repository so that they are available to everyone immediately.

In Spinning Up, running a deep reinforcement learning algorithm is as easy as:

    python -m spinup.run ppo --env CartPole-v1 --exp_name hello_world

(A sketch of the equivalent Python-API call follows at the end of this piece.)

For more details on Spinning Up, visit the OpenAI Blog.

This AI generated animation can dress like humans using deep reinforcement learning
Curious Minded Machine: Honda teams up with MIT and other universities to create an AI that wants to learn
MIT plans to invest $1 billion in a new College of Computing that will serve as an interdisciplinary hub for computer science, AI, data science
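The same algorithms are exposed as plain Python functions as well, so the CLI one-liner above has a programmatic equivalent. A minimal sketch based on the launch-era documentation, assuming the original TensorFlow-based package is installed; the output directory and hyperparameter values are illustrative:

    import gym
    import tensorflow as tf
    from spinup import ppo

    # Spinning Up builds the environment itself from a zero-argument constructor.
    env_fn = lambda: gym.make("CartPole-v1")

    ppo(
        env_fn=env_fn,
        # Small MLP actor-critic; sizes and activation are illustrative choices.
        ac_kwargs=dict(hidden_sizes=[64, 64], activation=tf.tanh),
        steps_per_epoch=4000,
        epochs=50,
        logger_kwargs=dict(output_dir="./out/hello_world", exp_name="hello_world"),
    )

Results land in the logger's output directory, where the package's plotting utilities can read them.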


Australia’s facial recognition and identity system can have a “chilling effect on freedoms of political discussion, the right to protest and the right to dissent”: The Guardian report

Bhagyashree R
09 Nov 2018
5 min read
On Wednesday, The Guardian reported that various civil rights groups and experts are warning that near real-time matching of citizens' facial images risks a profound chilling effect on protest and dissent. The facial recognition system is capable of rapidly matching pictures of people captured on CCTV with photos stored in government records, in order to detect criminals and identity theft.

What is this facial recognition and identity system?

In October last year, the Australian government agreed to establish a National Facial Biometric Matching Capability and signed an Intergovernmental Agreement on Identity Matching Services. The system is meant to make it easier for security and law enforcement agencies to identify suspects or victims of terrorism or other criminal activity and to combat identity crime. Under the agreement, agencies in all jurisdictions are allowed to use the new face matching service to access passport, visa, citizenship, and driver's license images. The system consists of two parts:

Face Verification Service (FVS): a one-to-one, image-based verification service that matches a person's photo against an image in a government record to help verify their identity.
Face Identification Service (FIS): unlike the FVS, this is a one-to-many, image-based identification service that matches a photo of an unknown person against multiple government records to help identify the person.

What are some concerns the system poses?

Since its introduction, the facial recognition and identity system has raised major concerns among academics, privacy experts, and civil rights groups. The system records and processes citizens' sensitive biometric information regardless of whether they have committed, or are suspected of, an offence. In a submission to the Parliamentary Joint Committee on Intelligence and Security, Professor Liz Campbell of Monash University points out that “the capability” breaches privacy rights: it allows the collection, storage, and sharing of personal details from people who are not even suspected of an offence.

According to Campbell, the system is also prone to errors: “Research into identity matching technology indicates that ethnic minorities and women are misidentified at higher rates than the rest of the population.” On investigating the FBI's facial recognition system, the US House Committee on Oversight and Government Reform likewise found inaccuracies: “Facial recognition technology has accuracy deficiencies, misidentifying female and African American individuals at a higher rate. Human verification is often insufficient as a backup and can allow for racial bias.” These inaccuracies often stem from the underlying algorithms, which are better at identifying people who look like their creators; in the British and Australian context, that means they are good at identifying white men.

Beyond the inaccuracies, there are also concerns about the level of access given to private corporations, and about the legislation's loose wording, which could allow the system to be used for purposes other than combating criminal activity. Lesley Lynch, deputy president of the NSW Council for Civil Liberties, believes these systems will chill free political discussion: “It’s hard to believe that it won’t lead to pressure, in the not too distant future, for this capability to be used in many contexts, and for many reasons. This brings with it a real threat to anonymity. But the more concerning dimension is the attendant chilling effect on freedoms of political discussion, the right to protest and the right to dissent. We think these potential implications should be of concern to us all.”

What are the supporters saying?

Despite these concerns, New South Wales is in favor of the capability: it is legislating to allow state driver's licenses to be shared with the commonwealth and investing $52.6m over four years to facilitate the rollout. Samantha Gavel, NSW's privacy commissioner, said the system has been designed with “robust” privacy safeguards, developed in consultation with state and federal privacy commissioners, and expressed confidence in the protections limiting access by private corporations: “I understand that entities will only have access to the system through participation agreements and that there are some significant restraints on private sector access to the system.”

David Elliott, NSW Minister for Counter-Terrorism, said the system will help prevent identity theft and that there will be limits on its use. Elliott told the state parliament: “People will not be charged for jaywalking just because their facial biometric information has been matched by law enforcement agencies. The Government will make sure that members of the public who have a driver license are well and truly advised that this information and capability will be introduced as part of this legislation. I am an avid libertarian when it comes to freedom from government interference and [concerns] have been forecasted and addressed in this legislation.”

To read the full story, head over to The Guardian’s official website.

Google’s new facial recognition patent uses your social network to identify you!
Amazon tried to sell its facial recognition technology to ICE in June, emails reveal
Emotional AI: Detecting facial expressions and emotions using CoreML [Tutorial]

Google introduces AI Hub, Kubeflow Pipelines and Cloud TPU updates to make artificial intelligence more accessible to businesses

Melisha Dsouza
09 Nov 2018
4 min read
Google is taking yet another step to make its artificial intelligence technology accessible across a range of industries. Yesterday, in a blog post, Google's director of product management for Cloud AI, Rajen Sheth, introduced a host of tools to “put AI in reach of all businesses”. He stated that even though the company has more than 15,000 paying customers using its AI services, that is not enough. The upgrades aim to make AI simpler, more useful, and faster, to drive adoption among businesses. Here are the tools Google released:

#1 The AI Hub, to make AI simpler

Released in alpha, the AI Hub is a “one-stop destination for plug-and-play ML content”, including pipelines, Jupyter notebooks, TensorFlow modules, and more. The AI Hub is meant to combat the scarcity of ML expertise in the workforce, which makes it hard for organizations to build comprehensive ML resources on their own. It aims to make high-quality ML resources developed by Google Cloud AI, Google Research, and other teams across Google publicly available to all businesses, and it also provides a private, secure hub where enterprises can upload and share ML resources within their own organizations. This helps businesses reuse pipelines and deploy them to production in GCP, or on hybrid infrastructures using the Kubeflow Pipelines system, in just a few steps. In the beta release, Google plans to expand the types of assets made available through the AI Hub, including public contributions from third-party organizations and partners.

#2 Kubeflow Pipelines and video API updates, to make AI useful

Kubeflow Pipelines enable organizations to build and package ML resources so that they are as useful as possible to the broadest range of internal users. This new component of Kubeflow packages ML code much like building an app, so that it is reusable by other users across an organization. It enables teams to:

compose, deploy, and manage reusable end-to-end machine learning workflows; and
experiment rapidly and reliably, so users can try many ML techniques to identify what works best for their application.

(A sketch of what defining such a pipeline looks like follows at the end of this piece.) Kubeflow Pipelines also let users take advantage of Google's TensorFlow Extended (TFX) open source libraries to address production ML issues such as model analysis, data validation, training-serving skew, data drift, and more.

Google has also released three features in the Cloud Video API (in beta) that address common challenges for businesses working extensively with video. Videos are now more readily searchable, since text detection can determine where and when text appears in a video; it supports more than 50 languages. Object Tracking can identify more than 500 classes of objects in a video. Speech Transcription for Video can transcribe audio, making it easy to create captions and subtitles and increasing the searchability of a video's contents.

#3 Cloud TPU updates, to make AI faster

Google's Tensor Processing Units (TPUs) are custom ASIC chips designed for machine learning workloads; they dramatically accelerate ML tasks and are easily accessed through the cloud. Since July, Google has been adding features to Cloud TPUs to make compute-intensive machine learning faster and more accessible to businesses worldwide.

In response to these upgrades, Kaustubh Das, vice president of data center product management at Cisco, stated, “Cisco is also delighted to see the emergence of Kubeflow Pipeline that promises a radical simplification of ML workflows which are critical for mainstream adoption. We look forward to bringing the benefits of this technology alongside our world class AI/ML product portfolio to our customers.” NVIDIA and Intel added endorsements along the same lines.

Head over to Google's official blog for full coverage of this announcement.

#GoogleWalkout demanded a ‘truly equitable culture for everyone’; Pichai shares a “comprehensive” plan for employees to safely report sexual harassment
Google open sources BERT, an NLP pre-training technique
Google AdaNet, a TensorFlow-based AutoML framework
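To give a feel for the "packages ML code just like building an app" claim, here is a minimal sketch of a two-step pipeline in the launch-era Kubeflow Pipelines Python SDK (kfp). The container images, arguments, and output paths are placeholders, not real Google artifacts:

    import kfp.dsl as dsl
    from kfp.compiler import Compiler

    @dsl.pipeline(
        name="train-and-evaluate",
        description="Toy two-step workflow: train a model, then evaluate it.",
    )
    def train_pipeline(learning_rate="0.01"):
        # Each step runs as a container; images and arguments are hypothetical.
        train = dsl.ContainerOp(
            name="train",
            image="gcr.io/example-project/trainer:latest",
            arguments=["--learning-rate", learning_rate],
            file_outputs={"model": "/out/model.txt"},
        )
        dsl.ContainerOp(
            name="evaluate",
            image="gcr.io/example-project/evaluator:latest",
            # Consuming a step's output creates the dependency edge.
            arguments=["--model", train.outputs["model"]],
        )

    # Compile to an archive that can be uploaded to the Pipelines UI,
    # shared internally, or published through the AI Hub.
    Compiler().compile(train_pipeline, "train_pipeline.tar.gz")

The compiled archive is the reusable artifact: another team can upload it and run the same workflow with different parameters.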


Apache Spark 2.4.0 released

Amrata Joshi
09 Nov 2018
2 min read
Last week, Apache Spark released its latest version, Apache Spark 2.4.0, the fifth release in the 2.x line. This release brings Barrier Execution Mode for better integration with deep learning frameworks, adds 30+ built-in and higher-order functions for dealing with complex data types, adds experimental Scala 2.12 support, and improves the Kubernetes (K8s) integration. The release also focuses on usability, stability, and polish, resolving around 1,100 tickets.

What's new in Apache Spark 2.4.0?

Built-in Avro data source
Image data source
Flexible streaming sinks
Elimination of the 2GB block-size limitation during transfer
Pandas UDF improvements

Major changes

Apache Spark 2.4.0 supports Barrier Execution Mode in the scheduler, for better integration with deep learning frameworks. Spark can now be built with Scala 2.12, and Spark applications can be written in Scala 2.12. The release includes a built-in Spark-Avro package with logical type support, for better performance and usability. Some users are SQL experts but are not familiar with Scala, Python, or R, so this version of Spark also adds support for Pivot in SQL. Apache Spark 2.4.0 adds a Structured Streaming ForeachWriter for Python: users can write ForeachWriter code in Python and use the partitionId and the version/batchId/epochId to conditionally process rows. The new release also introduces a Spark data source for the image format, so users can load images through the Spark source reader interface. (A short example of the new higher-order functions follows at the end of this piece.)

Bug fixes

Previously, the LookupFunctions rule checked the same function name repeatedly; this version includes an updated LookupFunctions rule that performs the check once per invocation. A PageRank change in Apache Spark 2.3 introduced a bug in the ParallelPersonalizedPageRank implementation, preventing serialization of a Map that needs to be broadcast to all workers; that issue is resolved in Apache Spark 2.4.0.

Read more about Apache Spark 2.4.0 on the official website of Apache Spark.

Building Recommendation System with Scala and Apache Spark [Tutorial]
Apache Spark 2.3 now has native Kubernetes support!
Implementing Apache Spark K-Means Clustering method on digital breath test data for road safety
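As a concrete taste of the new higher-order functions mentioned above, here is a small PySpark sketch (with illustrative data) using transform and aggregate, two of the array functions added in 2.4:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("spark-2.4-demo").getOrCreate()

    df = spark.createDataFrame(
        [(1, [1, 2, 3]), (2, [4, 5])],
        ["id", "values"],
    )

    # transform() maps a lambda over each array element; aggregate() folds
    # the array into a single value. Both operate on the array in place,
    # without exploding it into rows first.
    df.selectExpr(
        "id",
        "transform(values, x -> x * 2) AS doubled",
        "aggregate(values, 0, (acc, x) -> acc + x) AS total",
    ).show()

Before 2.4, the same result typically required exploding the array and regrouping, or writing a UDF; the built-ins avoid both.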


GitHub now allows repository owners to delete an issue: a curse or a boon?

Amrata Joshi
09 Nov 2018
3 min read
On Saturday, GitHub released a public beta of a new feature to delete issues. The feature lets repository admins permanently delete an issue from any repository, which gives repository owners considerably more power. Since GitHub tweeted the news, controversy around the feature has flared.

According to many, the feature could lead to the removal of issues that disclose severe security problems. Many users also rely on closed issues to resolve their own problems, since a repository's conversation history often holds a lot of information.

https://twitter.com/thegreenhouseio/status/1060257920158498817
https://twitter.com/aureliari/status/1060279790706589710

If someone posts a security vulnerability publicly as an issue, it can become a big problem for the project owner, since there is a high chance people will avoid future updates of the project. In that light, the feature could help organizations with damage control. Some of the issues users post on GitHub are not really issues, so the feature might help there too, and plenty of duplicate issues are posted, on purpose or by mistake, so it could work as a rescue tool.

In contrast, a lot of users oppose the feature. It may not help much, because no matter how fast a vulnerability report is erased, the information has already leaked via email notifications. A poll posted by one user on Twitter, with 71 votes at the time of writing, shows that 69% of participants dislike the feature, only 14% give it a thumbs up, and the remaining 17% have no opinion. The poll is still open; it will be interesting to see the final result.

https://twitter.com/d4nyll/status/1060422721589325824

Users are asking for a better option, such as a non-public way to report security issues. Others would prefer an archive option over permanent deletion, and some strongly favor removing the feature altogether.

https://twitter.com/kirilldanshin/status/1060265945598492677

With many users now blaming Microsoft for this GitHub feature, it will be interesting to see the next update on it: could it possibly just be an UNDO option?

Read more about this news on GitHub's official Twitter page.

GitHub now supports the GNU General Public License (GPL) Cooperation Commitment as a way of promoting effective software regulation
GitHub now allows issue transfer between repositories; a public beta version
GitHub October 21st outage RCA: How prioritizing ‘data integrity’ launched a series of unfortunate events that led to a day-long outage


ScyllaDB announces Scylla 3.0, a NoSQL database surpassing Apache Cassandra in features

Prasad Ramesh
09 Nov 2018
2 min read
ScyllaDB announced Scylla 3.0, the latest release of its NoSQL database, at Scylla Summit 2018 this week. Scylla is written in C++ and claims 10x the throughput of Apache Cassandra.

New features in Scylla 3.0

This release is a milestone for Scylla, as it surpasses Apache Cassandra in features.

Concurrent OLTP and OLAP support

Scylla 3.0 enables users to safely balance real-time operational workloads with big data analytical workloads within a single database cluster. Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) access data in very different ways: OLTP encompasses many small and varied transactions, including mixed writes, updates, and reads, with a high sensitivity to latency, while OLAP emphasizes the throughput of broad scans across datasets. With the addition of capabilities that isolate workloads, Scylla supports simultaneous OLTP and OLAP workloads while maintaining low latency and high throughput.

Materialized views are production-ready

Materialized views were an experimental feature in Scylla for a long time and are now production-ready. Materialized views enable automatic server-side table denormalization. Notably, the Apache Cassandra community reverted materialized views from production-ready to experimental status in 2017.

Secondary indexes

Secondary indexes are another feature that becomes production-ready with the Scylla 3.0 release. They allow users to query data via non-primary-key columns. Scylla's secondary indexes are global and can scale to clusters of any size, unlike the local-indexing approach adopted by Apache Cassandra. (A short sketch of both features in CQL follows at the end of this piece.)

Cassandra 3.x file format compatibility

Scylla 3.0 includes support for the Apache Cassandra 3.x compatible file format (SSTable), which improves performance and reduces storage volume by three times.

With a shared-nothing approach, Scylla has increased throughput and storage capacity to 10x that of Apache Cassandra. Scylla Open Source 3.0 has a close-to-the-hardware design that uses modern servers optimally; it is written from scratch in C++ for significant improvements in throughput and latency, and consistently achieves a 99% tail latency of less than 1 millisecond.

To know more about Scylla, visit the ScyllaDB website.

Why MongoDB is the most popular NoSQL database today
TimescaleDB 1.0 officially released
PostgreSQL 11 is here with improved partitioning performance, query parallelism, and JIT compilation
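To make the two production-ready features concrete, here is a minimal sketch in CQL, issued through the Python cassandra-driver (Scylla speaks the same wire protocol). The host, keyspace, and table names are illustrative:

    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect()
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS demo
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
    """)
    session.set_keyspace("demo")

    session.execute("""
        CREATE TABLE IF NOT EXISTS users (
            id int PRIMARY KEY, name text, city text)
    """)

    # Materialized view: server-side denormalization, queryable by city.
    session.execute("""
        CREATE MATERIALIZED VIEW IF NOT EXISTS users_by_city AS
            SELECT * FROM users
            WHERE city IS NOT NULL AND id IS NOT NULL
            PRIMARY KEY (city, id)
    """)

    # Global secondary index: query by a non-primary-key column.
    session.execute("CREATE INDEX IF NOT EXISTS ON users (name)")

    rows = session.execute("SELECT id, name FROM users_by_city WHERE city = 'Oslo'")

Writes to users are propagated to users_by_city by the server, which is exactly the denormalization work applications previously had to do by hand.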

Facebook’s GraphQL moved to a new GraphQL Foundation, backed by The Linux Foundation

Bhagyashree R
09 Nov 2018
3 min read
On Tuesday, The Linux Foundation announced that Facebook's GraphQL project has been moved to a newly established GraphQL Foundation, hosted by the non-profit Linux Foundation. The foundation will be dedicated to enabling widespread adoption and accelerating the development of GraphQL and its surrounding ecosystem.

GraphQL was developed by Facebook in 2012 and open-sourced in 2015. It has been adopted in production by many companies, including Airbnb, Atlassian, Audi, CNBC, GitHub, Major League Soccer, Netflix, Shopify, The New York Times, Twitter, Pinterest, and Yelp.

Why has the GraphQL Foundation been created?

The foundation will provide a neutral home for the community to collaborate, encouraging more participation and contribution. The community will be able to spread the responsibilities and costs of infrastructure, helping to increase the overall investment, and the neutral governance will ensure equal treatment within the community. The co-creator of GraphQL, Lee Byron, said: “As one of GraphQL’s co-creators, I’ve been amazed and proud to see it grow in adoption since its open sourcing. Through the formation of the GraphQL Foundation, I hope to see GraphQL become industry standard by encouraging contributions from a broader group and creating a shared investment in vendor-neutral events, documentation, tools, and support.”

The foundation will also provide more resources for the GraphQL community, benefiting all contributors: it will help organize events and working groups, formalize governance structures, provide marketing support to the project, and handle IP and other legal issues as they arise. Jim Zemlin, Executive Director of The Linux Foundation, believes the new foundation will ensure long-term support for GraphQL: “We are thrilled to welcome the GraphQL Foundation into the Linux Foundation. This advancement is important because it allows for long-term support and accelerated growth of this essential and groundbreaking technology that is changing the approach to API design for cloud-connected applications in any language.”

In the next few months, The Linux Foundation, together with Facebook and the GraphQL community, will finalize the founding members of the GraphQL Foundation.

Read the full announcement on The Linux Foundation’s website and also check out the GraphQL Foundation’s website.

Introducing Apollo GraphQL Platform for product engineering teams of all sizes to do GraphQL right
7 reasons to choose GraphQL APIs over REST for building your APIs
Apollo 11 source code: A small step for a woman, and a huge leap for ‘software engineering’


‘Black Swan’ fame, Darren Aronofsky, on how technologies like virtual reality and artificial intelligence are changing storytelling

Bhagyashree R
08 Nov 2018
5 min read
On Monday at Web Summit 2018, Darren Aronofsky, in an interview with WIRED correspondent Lauren Goode, spoke about how virtual reality and artificial intelligence are giving filmmakers and writers the freedom to be more imaginative, enabling them to shape their vision into reality. He is the director of many successful movies, including Requiem for a Dream, The Wrestler, and Black Swan, and one of his recent projects is Spheres, a three-part virtual reality series about black holes, written by Eliza McNitt and produced by Aronofsky's Protozoa Pictures. Aronofsky believes that combining storytelling and VR gives viewers a true emotional experience by taking them to a convincing, very different world. Here are some highlights from his interview.

How is VR storytelling different from filmmaking?

People have long talked about VR replacing film, but that is not going to happen anytime soon. “It may replace how people decide to spend their time but they are two different art forms and most people who work in virtual reality and filmmaking are aware that trying to blend them will not work,” said Aronofsky.

Aronofsky feels the experiences created by VR and film are very different. When you watch a movie, you not only watch the character but also feel what the character is feeling, through empathy. Aronofsky remarks that this is a great thing about filmmaking: “It is a great part of filmmaking that you can sit there and you can through close-up enter the subjective experience of the character who takes you on a journey where you are basically experiencing what the character is going through.” In virtual reality, by contrast, character is much less involved. It is highly experiential: instead of being transported into another person's shoes, you are much more yourself.

How is technology changing filmmaking for the better?

One of the biggest breakthroughs these technologies enable, according to Aronofsky, is letting filmmakers shape their ideas into exactly what they want. He points out that unlike the 70s and 80s, when there were only a few “Gandalfs” like Spielberg and George Lucas using computers to create experiences, computers can now be used by anybody to create amazing visual effects, animation, and much more. “Use of computers have unlocked the possibilities of what we can do and what type of stories we can tell,” he added. Technologies such as AI and VR have enabled filmmakers and writers to write and create extremely complicated sequences that would otherwise have taken many human hours. He says, “Machines has given many more ways of looking at the material.”

Is there a dark side to these technologies?

Though technology provides different ways of telling stories, there can be situations where its influence is too great. Aronofsky remarked that some filmmakers have lost control over the use of technology in their films, resulting in a “visual effects extravaganza”: the huge teams working on these projects focus more on visual effects than on the storytelling. At the same time, some filmmakers know exactly where to draw the line between the virtual and the real, giving their audiences beautiful movies to enjoy. “But there are filmmakers like James Cameron who are in control of everything and creating a vision where every single shot is chosen in if it is in virtual setting or in a real setting,” says the moviemaker.

On the question of whether AI could replace humans in filmmaking or storytelling, he feels that current technology is not mature enough to actually understand what a character is feeling. He says, “It’s a terrifying thought… When jokes and humor and stories start to be able to reproduced where you can’t tell the difference between them and the human counterparts is a strange moment… Storytelling is a tricky thing and I am going to be a bit of a Luddite now and put my faith in the constant invention of individuals to do something that a computer won’t.”

Does data influence a filmmaker's decisions?

Nowadays every decision is data-driven; online streaming services track each click and swipe to understand user preferences. But Aronofsky believes you cannot predict the future even with access to so much data. The popularity of the actors or the locations may help, but there is currently no fixed formula to predict how much success a film will see.

Technologies like AI and VR are helping filmmakers create visual effects, assisting with digital editing, and, all in all, letting them put no limits on their imagination. Watch Darren Aronofsky's full talk at Web Summit 2018:

https://youtu.be/lkzNZKCxMKc

Tim Berners-Lee is on a mission to save the web he invented
Web Summit 2018: day 2 highlights
UN on Web Summit 2018: How we can create a safe and beneficial digital future for all


NativeScript 5.0 released with code sharing, hot module replacement, and more!

Savia Lobo
08 Nov 2018
2 min read
Yesterday, Progress announced the release of NativeScript 5.0. With this release, the NativeScript framework hits a major milestone: 3.5 million downloads in under three years since its original launch in 2015. NativeScript 5.0 brings capabilities including code sharing, hot module replacement, preview enhancements, and more.

Brad Green, engineering director for the Angular framework at Google, said: “In 2016, NativeScript 2.0 gave developers the ability to use Angular and NativeScript to create native mobile apps, but we wanted to take it a step further by enabling them to keep their code in one place and share business logic between web, iOS, and Android. We've worked closely with the NativeScript team resulting in the NativeScript-Schematics that integrates with the Angular CLI, making the user experience for developing both web and mobile within a single project completely seamless.”

What's new in NativeScript 5.0

NativeScript-Schematics: a joint initiative with Angular to create a schematic that enables building both web and mobile apps from a single project.
New “Instant Start” CLI workflow: users can start working on native apps within minutes, with no up-front requirement to install the iOS and Android SDKs.
Hot Module Replacement: instantaneous, stateful application updates that avoid a full page reload, significantly accelerating the app development and debugging experience.
Vue.js support: a core Vue.js developer experience through a single-file component in the NativeScript Playground.
Streamlined “Getting Started” experience: easier development with Playground-compatible code samples available in the NativeScript Marketplace.
Android enhancements: more animation scenarios and more Material Design capabilities.
Vector type support for iOS: enables a broader set of augmented reality scenarios on iOS.

Todd Anglin, vice president of product and developer relations at Progress, said: “Today’s release of NativeScript 5.0 brings an unprecedented level of ease, flexibility and productivity to the mobile app dev experience. Not only have we improved the ability to kickstart a project for new NativeScript users, but we’ve also expanded key attributes important to developers more versed in our offerings.”

To know more about NativeScript 5.0 in detail, visit NativeScript’s official website.

How to integrate Firebase with NativeScript for cross-platform app development
NativeScript: What is it, and how to set it up
NativeScript 4.1 has been released

Introducing Apollo GraphQL Platform for product engineering teams of all sizes to do GraphQL right

Bhagyashree R
08 Nov 2018
3 min read
Yesterday, Apollo introduced the Apollo GraphQL Platform for product engineering teams. Built on Apollo's core open source GraphQL client and server, the platform adds further open source devtools and cloud services; it is a combination of open source components, commercial extensions, and cloud services.

[Architecture diagram (source: Apollo GraphQL)]

The Apollo GraphQL Platform consists of the following components:

Core open source components

Apollo Server: a JavaScript GraphQL server used to define a schema and a set of resolvers that implement each part of that schema. It supports AWS Lambda and other serverless environments.
Apollo Client: a GraphQL client that manages data and state in an application. It comes with integrations for React, React Native, Vue, Angular, and other view layers.
iOS and Android clients: allow querying a GraphQL API from native iOS and Android applications.
Apollo CLI: a command-line client that provides access to the Apollo cloud services.

Cloud services

Schema registry: a central registry that acts as a central source of truth for a schema. It propagates all changes and details of your data, allowing multiple teams to collaborate with full visibility and security on a single data graph.
Client registry: tracks each known consumer of a schema, which can include both pre-registered and ad hoc clients.
Operation registry: a registry of all known operations against the schema, which similarly can include both pre-registered and ad hoc operations.
Trace warehouse: a data pipeline and storage layer that captures structured information about each GraphQL operation processed by an Apollo Server.

Apollo Gateway

The GraphQL gateway is the commercial plugin for Apollo Server. It allows multiple teams to collaborate on a single, organization-wide schema without mixing everyone's code together in a monolithic single point of failure. To do that, the gateway deploys “micro-schemas” that reference each other into a single master schema, which looks to a client just like any regular GraphQL schema.

Workflows

In addition to these components, Apollo implements some useful workflows for managing a GraphQL API, including:

Schema change validation: checks the compatibility of a given schema against a set of previously observed operations, using the trace warehouse, the operation registry, and (typically) the client registry.
Safelisting: an end-to-end mechanism for safelisting known clients and queries, a recommended best practice that limits production use of a GraphQL API to specific pre-arranged operations. (A generic sketch of the idea follows at the end of this piece.)

To read the full announcement, check out Apollo's official blog.

Apollo 11 source code: A small step for a woman, and a huge leap for ‘software engineering’
7 reasons to choose GraphQL APIs over REST for building your APIs
Baidu open sources ApolloScape and collaborates with Berkeley DeepDrive to further machine learning in automotives
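Apollo's own tooling is JavaScript, but the safelisting workflow is easy to sketch generically: the server keeps a registry of hashes of the pre-arranged operations and rejects anything else. A minimal illustration in Python, not Apollo's actual implementation; the registry contents and query shapes are hypothetical:

    import hashlib

    def op_hash(query: str) -> str:
        # Operations are identified by a stable hash of their text.
        return hashlib.sha256(query.encode("utf-8")).hexdigest()

    # Hypothetical registry: hashes of operations registered at build time.
    REGISTERED = {op_hash("query Me { me { id name } }")}

    def is_safelisted(query: str) -> bool:
        """Allow only operations whose hash was pre-registered."""
        return op_hash(query) in REGISTERED

    assert is_safelisted("query Me { me { id name } }")
    assert not is_safelisted("query Probe { allUsers { email } }")

In production the check happens in the server or gateway before execution, so ad hoc queries never reach the resolvers.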


GitHub now supports the GNU General Public License (GPL) Cooperation Commitment as a way of promoting effective software regulation

Savia Lobo
08 Nov 2018
3 min read
Yesterday, GitHub announced that it now supports the GPL Cooperation Commitment, alongside 40 other software companies, because the commitment aligns with GitHub's core values. According to the GitHub post, by supporting this change, GitHub “hopes that this commitment will improve fairness and certainty for users of key projects that the developer ecosystem relies on, including Git and the Linux kernel. More broadly, the GPL Cooperation Commitment provides an example of evolving software regulation to better align with social goals, which is urgently needed as developers and policymakers grapple with the opportunities and risks of the software revolution.”

Effective regulation has an enforcement mechanism that encourages compliance. The most severe penalties for non-compliance, such as shutting down a line of business, should be reserved for repeat and intentional violators; less serious or accidental non-compliance might result only in warnings, with the expectation that the violation is promptly corrected.

GPL as private software regulation

The GNU General Public License (GPL) is a tool for a private regulator (the copyright holder) to achieve a social goal: under the license, anyone who receives a covered program has the freedom to run, modify, and share that program. From the perspective of an effective regulator, however, GPL version 2 has a bug: non-compliance results in termination of the license, with no provision for reinstatement. This makes the license marginally more useful to copyright “trolls” who want to force companies to pay rather than come into compliance.

GPLv3 fixes the bug by introducing a “cure provision”, under which a violator usually has their license reinstated if the violation is promptly corrected. Communities including Git and the Linux kernel use GPLv2, which dates from 1991, and many are unlikely ever to switch to GPLv3, as that would require agreement from all copyright holders, and not everyone agrees with all of GPLv3's changes. GPLv3's cure provision, however, is uncontroversial and can be backported to the extent that GPLv2 copyright holders agree.

How the GPL Cooperation Commitment helps

The GPL Cooperation Commitment is a way for a copyright holder to agree to extend GPLv3's cure provision to all GPLv2 licenses they offer (and also to LGPLv2 and LGPLv2.1 licenses, which have the same bug), giving violators a fair chance to come into compliance and have their licenses reinstated. The commitment also incorporates one of several principles (the others do not relate directly to license terms) for enforcing compliance with the GPL and other copyleft licenses as effective private regulation.

To know more about GitHub's support for the GPL Cooperation Commitment, visit its official blog post.

GitHub now allows issue transfer between repositories; a public beta version
GitHub updates developers and policymakers on EU copyright Directive at Brussels
The LLVM project is ditching SVN for GitHub. The migration to GitHub has begun