
Tech News

Stanford researchers introduce DeepSolar, a deep learning framework that mapped every solar panel in the US

Bhagyashree R
20 Dec 2018
3 min read
Yesterday, researchers from Stanford University introduced DeepSolar, a deep learning framework that analyzes satellite images to identify the GPS location and size of solar panels. Using this framework, they have built a comprehensive database of the GPS locations and sizes of solar installations in the US. The system identified 1.47 million individual solar installations across the United States, ranging from small rooftop setups to solar farms and utility-scale systems. The DeepSolar database is publicly available to help researchers extract further insights into solar adoption. It will also help policymakers better understand the correlation between solar deployment and socioeconomic factors such as household income, population density, and education level.

How does DeepSolar work?

DeepSolar uses transfer learning to train a CNN classifier on 366,467 images. These images are sampled from over 50 cities and towns across the US, with only image-level labels indicating the presence or absence of panels. One of the researchers, Rajagopal, explained the model to Gizmodo: “The algorithm breaks satellite images into tiles. Each tile is processed by a deep neural net to produce a classification for each pixel in a tile. These classifications are combined together to detect if a system—or part of—is present in the tile.”

Once training is complete, the network produces an activation map, also known as a heat map, that outlines the panels; this outline can be used to obtain the size of each solar panel system. Rajagopal further explained why this approach is robust: “A rooftop PV system typically corresponds to multiple pixels. Thus even if each pixel classification is not perfect, when combined you get a dramatically improved classification. We give higher weights to false negatives to prevent them.”

What are some of the observations the researchers made?

To measure classification performance, the researchers used two metrics: precision, the ratio of correct decisions among all positive decisions, and recall, the ratio of correct decisions among all positive samples. DeepSolar achieved a precision of 93.1% with a recall of 88.5% in residential areas, and a precision of 93.7% with a recall of 90.5% in non-residential areas. To measure size estimation performance, they calculated the mean relative error (MRE), which was 3.0% for residential areas and 2.1% for non-residential areas.

Future work

Currently, the DeepSolar database covers only the contiguous US. The researchers plan to expand its coverage to all of North America, including remote areas with utility-scale solar and non-contiguous US states, and ultimately to other countries and regions of the world. DeepSolar also currently estimates only the horizontal projection areas of solar panels from satellite imagery. In the future, it could infer high-resolution roof orientation and tilt information from street view images, giving a more accurate estimation of solar system size and solar power generation capacity.

To know more, check out the research paper published by Ram Rajagopal et al: DeepSolar: A Machine Learning Framework to Efficiently Construct a Solar Deployment Database in the United States.
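For readers unfamiliar with the evaluation metrics quoted above, here is a minimal sketch (not the authors' code; the numbers are made up purely for illustration) of how precision, recall, and mean relative error are computed, assuming the standard definitions:

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Precision: correct positives / all positive decisions.
    Recall: correct positives / all actual positive samples."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall


def mean_relative_error(estimated_sizes, true_sizes):
    """One common MRE definition: average of |estimate - truth| / truth."""
    errors = [abs(est - true) / true for est, true in zip(estimated_sizes, true_sizes)]
    return sum(errors) / len(errors)


# Hypothetical counts and size estimates, for illustration only.
p, r = precision_recall(true_positives=885, false_positives=66, false_negatives=115)
mre = mean_relative_error([10.3, 24.8, 51.0], [10.0, 25.5, 50.0])
print(f"precision={p:.3f} recall={r:.3f} MRE={mre:.2%}")
```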
Introducing remove.bg, a deep learning based tool that automatically removes the background of any person based image within 5 seconds
NeurIPS 2018: How machine learning experts can work with policymakers to make good tech decisions [Invited Talk]
NVIDIA makes its new “brain for autonomous AI machines”, Jetson AGX Xavier Module, available for purchase

TensorFlow 2.0 to be released soon with eager execution, removal of redundant APIs, tf.function and more

Amrata Joshi
15 Jan 2019
3 min read
Just two months ago, Google’s TensorFlow, one of the most popular machine learning platforms, celebrated its third birthday. Last year in August, Martin Wicke, an engineer at Google, posted a list of what to expect in TensorFlow 2.0 on the TensorFlow Google group. The key features he listed include:

This release will come with eager execution.
It will feature more platforms and languages, along with improved compatibility.
Deprecated APIs will be removed.
Duplication will be reduced.

https://twitter.com/aureliengeron/status/1030091835098771457

An early preview of TensorFlow 2.0 is expected soon. TensorFlow 2.0 is expected to come with high-level APIs, robust model deployment, powerful experimentation for research, and a simplified API.

Easy model building with Keras

This release adopts Keras, a user-friendly API standard for machine learning, for building and training models. As Keras provides several model-building APIs, including sequential, functional, and subclassing, it becomes easier for users to choose the right level of abstraction for their project.

Eager execution and tf.function

TensorFlow 2.0 will also feature eager execution, which is useful for immediate iteration and debugging, while tf.function will translate Python programs into TensorFlow graphs. Performance optimizations are retained, and tf.function adds the flexibility of expressing programs in plain Python. Further, tf.data will be used for building scalable input pipelines.

Transfer learning with TensorFlow Hub

The team at TensorFlow has made things much easier for those who do not want to build a model from scratch. Users will soon be able to use models from TensorFlow Hub, a library of reusable parts of machine learning models, to train a Keras or Estimator model.

API Cleanup

Many APIs are removed in this release, including tf.app, tf.flags, and tf.logging. The main tf.* namespace will be cleaned up by moving lesser-used functions into subpackages such as tf.math. A few APIs have been replaced with their 2.0 equivalents, such as tf.keras.metrics, tf.summary, and tf.keras.optimizers. The v2 upgrade script can be used to apply these renames automatically.

Major Improvements

Queue runners will be removed in this release.
Graph collections will also be removed.
Some APIs will be renamed for better usability. For example, name_scope can now be accessed as tf.name_scope or tf.keras.backend.name_scope.

To ease migration to TensorFlow 2.0, the TensorFlow team will provide a conversion tool that updates TensorFlow 1.x Python code to use TensorFlow 2.0 compatible APIs and flags cases where code cannot be converted automatically. Stored GraphDefs and SavedModels will remain backward compatible. With this release, tf.contrib will no longer be distributed; some of the existing contrib modules will be integrated into the core project or moved to a separate repository, and the rest will be removed.

To know more, check out the post by the TensorFlow team on Medium.
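The 2.0 workflow described above can be sketched roughly as follows (an illustrative example, not code from the announcement; the layer sizes and toy data are made up): a Keras model is built and used eagerly, while the training step is wrapped in tf.function so it can be traced into a graph.

```python
import tensorflow as tf  # assumes TensorFlow 2.x

# Keras as the high-level model-building API.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

@tf.function  # traces this Python function into a TensorFlow graph
def train_step(features, labels):
    with tf.GradientTape() as tape:
        logits = model(features, training=True)
        loss = loss_fn(labels, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# Eager execution: ordinary tensors can be created and inspected directly.
x = tf.random.normal((8, 32))
y = tf.constant([0, 1, 2, 3, 4, 5, 6, 7])
print(train_step(x, y).numpy())
```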
Building your own Snapchat-like AR filter on Android using TensorFlow Lite [Tutorial]
Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha) to effectively train and deploy TensorFlow machine learning models on GCP
Google researchers introduce JAX: A TensorFlow-like framework for generating high-performance code from Python and NumPy machine learning programs

The Linux and RISC-V foundations team up to drive open source development and adoption of RISC-V instruction set architecture (ISA)

Bhagyashree R
29 Nov 2018
3 min read
Yesterday, the Linux Foundation announced that it is joining hands with the RISC-V Foundation to drive the open source development and adoption of the RISC-V instruction set architecture (ISA).

https://twitter.com/risc_v/status/1067553703685750785

The RISC-V Foundation is a non-profit corporation responsible for directing the future development of the RISC-V ISA. Since its formation, it has grown quickly and now includes more than 100 member organizations. With this collaboration, the foundations aim to further grow the RISC-V ecosystem and provide improved support for the development of new applications and architectures across all computing platforms.

Rick O’Connor, executive director of the RISC-V Foundation, said, “With the rapid international adoption of the RISC-V ISA, we need increased scale and resources to support the explosive growth of the RISC-V ecosystem. The Linux Foundation is an ideal partner given the open source nature of both organizations. This joint collaboration with the Linux Foundation will enable the RISC-V Foundation to offer more robust support and educational tools for the active RISC-V community, and enable operating systems, hardware implementations and development tools to scale faster.”

The Linux Foundation will provide governance, best practices for open source development, and resources such as training programs and infrastructure tools. It will also help RISC-V with community outreach, marketing, and legal expertise.

Jim Zemlin, executive director of the Linux Foundation, believes RISC-V has great potential given its popularity in areas like AI, machine learning, and IoT. He said, “RISC-V has great traction in a number of markets with applications for AI, machine learning, IoT, augmented reality, cloud, data centers, semiconductors, networking and more. RISC-V is a technology that has the potential to greatly advance open hardware architecture. We look forward to collaborating with the RISC-V Foundation to advance RISC-V ISA adoption and build a strong ecosystem globally.”

The two foundations have already started working on a pair of getting-started guides for running Zephyr, a small, scalable open source real-time operating system (RTOS) optimized for resource-constrained devices. They are also hosting the RISC-V Summit, a four-day event running December 3-6 in Santa Clara. The summit will include sessions on the RISC-V ISA architecture, commercial and open source implementations, software and silicon, vectors and security, applications and accelerators, and much more.

Read the complete announcement on the Linux Foundation’s official website.

Uber becomes a Gold member of the Linux Foundation
The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project
Google becomes new platinum member of the Linux foundation

Thanks to DeepCode, AI can help you write cleaner code

Richard Gall
30 Apr 2018
2 min read
DeepCode is a tool that uses artificial intelligence to help software engineers write cleaner code. It's a bit like Grammarly or the Hemingway Editor, but for code. It works in an ingenious way: using AI, it reads your GitHub repositories and highlights anything that might be broken or cause compatibility issues. It is currently only available for Java, JavaScript, and Python, but more languages are going to be added.

DeepCode is more than a debugger

Sure, DeepCode might sound a little like a glorified debugger. But it's important to understand that it's much more than that. It doesn't just correct errors; it can actually help you improve the code you write. That means the project's mission isn't just code that works, but code that works better. It's thanks to AI that DeepCode is able to support code performance too: the software learns 'rules' about how code works best. And because DeepCode is an AI system, it's only going to get better as it learns more.

Speaking to TechCrunch, Boris Paskalev claimed that DeepCode has more than 250,000 rules, a number that is "growing daily." Paskalev went on to explain: "We built a platform that understands the intent of the code... We autonomously understand millions of repositories and note the changes developers are making. Then we train our AI engine with those changes and can provide unique suggestions to every single line of code analyzed by our platform.”

DeepCode is a compelling prospect for developers. As applications become more complex and efficiency becomes increasingly important, a simple route to unlocking greater performance could be invaluable. It's no surprise that it has already raised 1.1 million in investment from VC firm btov, and it's only going to become more popular with investors as the platform grows. This might mean the end of spaghetti code, which can only be a good thing.

Find out more about DeepCode and its pricing here.

Read more: Active Learning: An approach to training machine learning models efficiently

Introducing Kweb: A Kotlin library for building rich web applications

Bhagyashree R
10 Dec 2018
2 min read
Kweb is a library for easily building web applications in the Kotlin programming language. It essentially eliminates the separation between browser and server from the programmer’s perspective, which means that events that only manipulate the DOM don't need a server round-trip. As Kweb is written in Kotlin, users should have some familiarity with the Kotlin and Java ecosystem.

Kweb lets you keep all of the business logic on the server side and communicates with the web browser through efficient websockets. To handle asynchronicity efficiently, it takes advantage of Kotlin’s powerful new coroutines mechanism. It also keeps state consistent across client and server by seamlessly conveying events between both.

What are the features of Kweb?

Makes the barrier between the web server and web browser mostly invisible to the programmer.
Minimizes server-browser chatter and browser rendering overhead.
Supports integration with powerful JavaScript libraries like Semantic, a UI framework designed for theming.
Allows binding DOM elements in the browser directly to state on the server, updating them automatically through the observer and data mapper patterns.
Seamlessly integrates with Shoebox, a Kotlin library for persistent data storage that supports views and the observer pattern.
Can easily be added to an existing project.
Updates your web browser instantly in response to code changes.

The Kweb library is distributed via JitPack, a package repository for JVM and Android projects. Kweb takes advantage of the fact that in most web apps, the logic lives on the server side and the client can’t be trusted. The library is still in its infancy, but it works well enough to demonstrate that the approach is practical.

You can read more about Kweb on its official website.

Kotlin based framework, Ktor 1.0, released with features like sessions, metrics, call logging and more
Kotlin 1.3 released with stable coroutines, multiplatform projects and more
KotlinConf 2018: Kotlin 1.3 RC out and Kotlin/Native hits beta

IBM halts sales of Watson AI tool for drug discovery amid tepid growth: STAT report

Fatema Patrawala
19 Apr 2019
3 min read
STAT reported yesterday that IBM is halting sales of its “Watson for Drug Discovery” machine learning/AI tool, according to sources within the company. According to the STAT report, IBM is giving up its efforts to develop and flog the Drug Discovery technology due to “sluggish sales.” But no one seems to have told IBM’s website team, because the product information pages are still up on the IBM website, and they’re worth a look to see how the product has been over-promised.

Apparently, IBM Watson Health uses AI software to help companies reveal the connections and relationships among genes, drugs, diseases, and other entities by analyzing multiple sets of life sciences knowledge. But according to an IEEE Spectrum report, IBM’s entire foray into health care has been marked by the familiar combination of over-promising and under-delivery.

However, the service isn’t completely shutting down. IBM spokesperson Ed Barbini told The Register: “We are not discontinuing our Watson for Drug Discovery offering, and we remain committed to its continued success for our clients currently using the technology. We are focusing our resources within Watson Health to double down on the adjacent field of clinical development where we see an even greater market need for our data and AI capabilities.”

In other words, it appears the product won’t be sold to any new customers, but organizations that want to continue using the system will still be supported. “The offering is staying on the market, and we'll work with clients who want to team with IBM in this area. But our future efforts will be more focused on clinical trials – it's a much bigger market and better use of our technology and tools,” according to IBM.

The Drug Discovery service is made up of several different products or "modules," such as a search engine that allows chemists to crawl scientific abstracts for information on a specific gene or chemical compound. There’s also a knowledge network that describes relationships between drugs and diseases.

IBM’s Health division has been crumbling for a while. IBM Watson Health’s Oncology AI software dished out incorrect and unsafe recommendations during beta testing, and in October last year, Deborah DiSanzo, IBM’s head of Watson Health, stepped down from her position.

IBM CEO, Ginni Rometty, on bringing HR evolution with AI and its predictive attrition AI
IBM announces the launch of Blockchain World Wire, a global blockchain network for cross-border payments
Diversity in Faces: IBM Research’s new dataset to help build facial recognition systems that are fair

US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny

Fatema Patrawala
06 Jun 2019
4 min read
According to widespread media reports, the US Department of Justice is preparing an investigation into Google. The probe would reportedly examine whether the tech giant broke antitrust law in the operation of its online and advertising businesses.

On Monday, Yahoo Finance reported that the Justice Department's Antitrust Division and the Federal Trade Commission (FTC) met in recent weeks and agreed to give the Justice Department jurisdiction to undertake potential antitrust probes of major tech giants like Facebook, Amazon, Apple and Google. The supposedly imminent US investigation is the latest sign of the increasing scrutiny facing the world’s major tech businesses. Other reports cited that e-commerce giant Amazon could also face heightened antitrust scrutiny under a new agreement between US regulators and the FTC.

Nancy Pelosi, Speaker of the House, tweeted yesterday that “the US regulators will begin long overdue investigation to determine if digital platforms have harmed Americans. Unwarranted, concentrated economic power in the hands of a few is dangerous to democracy – especially when digital platforms control content. The era of self-regulation is over.”

https://twitter.com/SpeakerPelosi/status/1135698760397393921

The news of the potential investigation of Google and Facebook was first broken by The Wall Street Journal last Friday. As per a CNN report, the scope of the DOJ's interest is unclear; agency regulators led by antitrust chief Makan Delrahim say their focus is on Google's search business and its advertising practices. This investigation would pose the most serious challenge yet to the tech industry, as political leaders like Elizabeth Warren have accused Silicon Valley titans of strangling competition and have demanded that the tech giants be broken up. These calls have been spurred on by an endless string of privacy mishaps, misinformation scandals, and the proliferation of graphic and hateful content on tech platforms, the most recent example being the Christchurch terrorist attack in New Zealand, which was enabled and amplified by social media.

A fresh investigation of Google by US regulators could also reopen older probes from 2013, when the company was under FTC investigation. It emerged relatively unscathed then, as Google pledged to change certain aspects of its business, such as how it handles content from third-party travel or shopping sites, and the FTC closed the investigation.

EU regulators have subjected Google to similar scrutiny. In March this year, the company faced a €1.5bn fine from EU regulators over antitrust violations. The fine was linked to its AdSense service, with the EU ruling that the terms it forced on companies were unfair, including preventing rivals from appearing in online search ads through “exclusivity contracts” with publishers. The €1.5bn fine was the third the EU commission has levied against Google in the past two years. In 2018, it slapped the firm with a €4.3bn fine relating to its Android mobile operating system, which was used to unfairly undercut rivals in the mobile phone market, and a €2.4bn fine for promoting its own shopping service over rivals.

CNN further reports that, according to some analysts, the likelihood of a new probe of Google is high. "If there were a clearance fight over a Google investigation — this is such a big matter that whoever won the clearance fight would almost certainly be opening an investigation," said Michael Kades, a former FTC antitrust attorney.

“Still, just because DOJ is taking on responsibility for Google's oversight does not mean an investigation has been opened or that the agency is imminently poised to act against Google,” said Gene Kimmelman, a former Justice Department antitrust official. "This is a warning sign for Google," he said. "It's quite clear the Department of Justice will be at least scrutinizing their behavior very carefully."

Maureen Ohlhausen, a former acting chairman of the FTC, said the tech industry's fall from political grace has raised expectations for the nation's top competition enforcers, and for Delrahim to stake out new territory for DOJ. "With the continuing techlash from the right and the left, both antitrust agencies are under pressure to escalate their actions," she said. "The companies ought to be careful about how they behave at this moment," Kimmelman added. "But I also think there will be enormous scrutiny of what the enforcement officials do, and why they do it."

As US-China tech cold war escalates, Google revokes Huawei’s Android support, allows only those covered under open source licensing
Silicon Valley investors envy China’s 996 work culture; but tech workers stand in solidarity with their Chinese counterparts
GDPR complaint in EU claim billions of personal data leaked via online advertising bids

10 years of GitHub

Richard Gall
12 Apr 2018
3 min read
GitHub celebrated its tenth birthday on Tuesday. Since its launch in April 2008, the version control platform has come to define the way we build software. It's difficult to see open source software culture evolving to the extent it has without GitHub. Its impact can be felt even beyond software: it has changed the way users experience the web, and it has made artificial intelligence more pervasive than ever. For that reason, we should all pay tribute to GitHub, developers and non-developers alike.

You can find Packt on GitHub here. We have more than 1,400 code repositories for our products that you can use as you learn.

27 million developers have 'contributed' to GitHub's success

GitHub now has more than 27 million developers on its platform and is home to more than 80 million projects. To say thank you to everyone who has been a part of the last 10 years, everyone who has quite literally contributed to its success, the team put together this short video: https://www.youtube.com/watch?v=hQXV70Z4cFI

Key GitHub milestones

Let's take a look at some of the most important milestones in GitHub's first decade. Find out more here.

April 12, 2008 - GitHub officially launches. Read the post from 2008 to take a trip back in time...
May 21, 2009 - Node.js launches and its saga begins. From io.js forking in 2014 to reunification a year later, the JavaScript runtime is today one of the most popular tools on GitHub. There are almost 2,000 contributors to Node.js Core, the central Node.js project.
January 1, 2012 - JavaScript begins 2012 as the most popular language on GitHub. More than 6 years later, that is still the case.
January 16, 2013 - GitHub reaches 3 million users. In the space of just 5 years, the platform was truly embedded in the software landscape.
October 23, 2014 - Microsoft takes the significant step of making .NET open source. This was the start of a cultural shift at Microsoft that's still happening today. Perhaps more than any other moment in the history of open source, it underlined that open source was no longer a niche stream of the software landscape; it had become part of the mainstream.
March 2, 2015 - Unreal Engine 4 makes its source code available for free, giving game developers access to an incredibly powerful tool at no cost whatsoever. The impact on the growth of game development is worth noting: 'games' was one of the most popular topics on the platform in 2017.
December 3, 2015 - Mirroring the move made by Microsoft in 2014, Apple makes Swift open source. Again, a huge tech company, with a similar reputation for isolationism, embracing open source.
February 15, 2017 - The launch of TensorFlow 1.0 marks the start of the boom in artificial intelligence, or more specifically, the point at which artificial intelligence and machine learning became accessible to millions more people than ever before. The range of projects TensorFlow has been a part of is astonishing: from research to medicine to marketing, its accessibility has had an impact on the world in ways many people don't realise.
May 31, 2017 - GitHub's 100 millionth pull request is merged.

Thank you, GitHub, for 10 years supporting software developers. It really wouldn't have been the same without you. Here's to another 10 years...

Is Comet the new Github for Artificial Intelligence?
Mine Popular Trends on GitHub using Python – Part 1
Github’s plan for coding automation, TensorFlow releases Tensorflow Lattice – 11th Oct.’ 17 Headlines
Collaboration Using the GitHub Workflow
Using Gerrit with GitHub

Red Hat drops MongoDB over concerns related to its Server Side Public License (SSPL)

Natasha Mathur
17 Jan 2019
3 min read
It was last October when MongoDB announced that it was switching to the Server Side Public License (SSPL). Now, news that Red Hat is removing MongoDB from Red Hat Enterprise Linux and Fedora over the SSPL has been gaining attention.

Tom Callaway, University Outreach team lead at Red Hat, mentioned in a note earlier this week that Fedora does not consider MongoDB’s Server Side Public License v1 (SSPL) a free software license. He further explained that the SSPL is “intentionally crafted to be aggressively discriminatory towards a specific class of users. To consider the SSPL to be "Free" or "Open Source" causes that shadow to be cast across all other licenses in the FOSS ecosystem, even though none of them carry that risk”.

The first instance of Red Hat removing MongoDB happened back in November 2018, when the RHEL 8.0 beta was released. The RHEL 8.0 beta release notes explicitly mentioned that the reason for removing MongoDB was the SSPL.

Apart from Red Hat, Debian also dropped MongoDB from the Debian archive last month over similar concerns. “For clarity, we will not consider any other version of the SSPL beyond version one. The SSPL is clearly not in the spirit of the DFSG (Debian’s free software guidelines), let alone complimentary to the Debian's goals of promoting software or user freedom”, mentioned Chris Lamb, Debian Project Leader. Debian developer Apollon Oikonomopoulos also mentioned that MongoDB 3.6 and 4.0 will be supported for longer, but that Debian will not be distributing any SSPL-licensed software, noting that keeping the last AGPL-licensed version (3.6.8 or 4.0.3) without the ability to “cherry-pick upstream fixes is not a viable option”. That said, MongoDB 3.4 will remain part of Debian as long as it is AGPL-licensed (MongoDB’s previous license).

MongoDB’s decision to move to the SSPL was driven by cloud providers exploiting its open source code. The SSPL specifies an explicit condition that companies wanting to use, review, modify, or redistribute MongoDB as a service would have to open source the software they’re using. This, in turn, led to a debate in the industry and the open source community, which began to question whether MongoDB is open source anymore.

https://twitter.com/mjasay/status/1082428001558482944

MongoDB’s adoption of the SSPL also forces companies to either go open source or choose MongoDB’s commercial products. “It seems clear that the intent of the license author is to cause Fear, Uncertainty, and Doubt towards commercial users of software under that license”, mentioned Callaway.

https://twitter.com/mjasay/status/1083853227286683649

MongoDB acquires mLab to transform the global cloud database market and scale MongoDB Atlas
MongoDB Sharding: Sharding clusters and choosing the right shard key [Tutorial]
MongoDB 4.0 now generally available with support for multi-platform, mobile, ACID transactions and more

New MapR Platform 6.0 powers DataOps

Sugandha Lahoti
22 Nov 2017
3 min read
MapR Technologies Inc. announced the release of a new version of its Converged Data Platform. The new MapR Platform 6.0 is focused on DataOps, an approach that increases the value of data by bringing together functions from across an enterprise and aims to improve the quality and reduce the life cycle time of data analytics for big data applications. MapR Platform 6.0 offers the entire DataOps team in an organization (data scientists, data engineers, systems administrators, and cluster operators) a unified management solution.

Some of the top releases and features of the platform include:

The MapR Control System (MCS), a new centralized control system that converges all data sources and types from multiple backends. It is built on the Spyglass Initiative and provides a unified management solution for the data stored in the MapR platform, including files, JSON tables, and streaming data. MapR 6.0 MCS also comes with:
A quick-glance cluster dashboard
Resource utilization by node and by service
Capacity planning using storage utilization trends and per-tenant usage
Easy setup of replication, snapshots, and mirrors
The ability to manage cluster events with related metrics and expert recommendations
Direct access to default metrics and pre-filtered logs
The power to manage MapR Streams and configure replicas
Access to MapR-DB tables, indexes, and change logs
Intuitive mechanisms to set up volume, table, and stream ACEs for access control

MapR Monitoring uses MapR Streams in the core architecture to build a customizable, scalable, and extensible monitoring framework.

The platform also includes the latest release of MapR-DB 6.0, a multi-model database built for data-intensive applications such as real-time streaming, operational workloads, and analytical applications.

The MapR Data Science Refinery provides scalable data science tools that help organizations generate insights from their data and convert them into operational applications. It provides access to all platform assets, including app servers, web servers, and other client nodes and apps, and ships with 8 visualization libraries, including Matplotlib and ggplot2. In addition, Apache Spark connectors are provided for interacting with both MapR-DB and MapR-ES.

MapR also includes a preconfigured Docker container for using MapR as a data store; the stateful containers offer easy deployment while remaining secure and extensible. Organizations can also create real-time pipelines for machine learning applications and apply ML models to real-time data through the native integration between MapR-ES and ML libraries. In addition, MapR Platform 6.0 includes single-click security enhancements, cloud-scale multi-tenancy, and MapR volume metrics, available via an extensible volume dashboard in Grafana.

MapR Platform 6.0 is available now. For cloud providers such as Microsoft Azure, Amazon Web Services, and Oracle Cloud, version 6.0 will be available before the end of this year. For more information about the product, you can visit the official documentation here.

How To Resolve User Error in Kerberos Configuration Manager from Blog Posts - SQLServerCentral

Anonymous
26 Dec 2020
6 min read
A recurring theme in the world of SQL Server seems to be the battle with Kerberos, SPNs, and SSPI context errors. It is common enough that many DBAs have gone bald along with their domain admin counterparts trying to fix the problems. Once upon a time, I contributed an article showing a decent tool that can help figure out some of the problems related to SPNs, SSPI errors, and Kerberos in general with regards to SQL Server. The tool I mentioned in that article is called “Kerberos Configuration Manager” (KCM). Recently, I ran into an error with this tool that is a bit absurd and not very helpful at all. Given the absurdity of the error and the uselessness of its message, I thought to myself: why not share the problem and resolution in this, the first article in this year’s 12 Days of Christmas series.

What A Wretched Little Error

On the surface this seems like a pretty chill error. It seems that there is useful information in the error screen that could help one go and figure out what to do next. Looking at the error text, it appears that I have an issue with permissions because I can’t access UserAccount information, and the error text gives me a log to go check. Let’s break this down a little bit. This error pops up with a user that happens to be a member of the local admins group on the server in question. The user also happens to be a domain administrator. And <grimace>, this user is also a sysadmin in SQL Server. So, seemingly, permissions should not be an issue, right? I know, I know. This is not an ideal security setup. It just so happens to be a point of interest currently being discussed and worked on with this client. The security setup will get better. That said, I would eliminate permissions as a variable, and therefore permissions would not be a cause of this error.

Let’s take a look at the next bit of information from the error text. The error text tells me there is a log available and gives me the directory where it should exist, so it is time to look at that log. If I open that file and look at the contents, I frequently see the same thing (4 for 4 with this particular client): the file is entirely empty. This is a problem! The place I am supposed to look to resolve this problem has nothing logged to the file. How can I possibly use that to troubleshoot? Well, keep reading.

The Cause and How to Fix this Kerberos Config Mgr Error

The error message is certainly misleading. Then again, maybe it isn’t. As it turns out, the cause of the message is the existence of a ghosted AD account in the local admins group. The local admins group on each of the affected systems had at least one of these detached SIDs: accounts that basically don’t exist any longer in Active Directory. These accounts should be cleaned out on a regular basis, and it is a fairly risk-free process. Given this bit of insight, if you re-examine the error text, it now makes sense. There is an account that the tool cannot access because the account does not truly exist; just some shards of it remain. To fix the error, just delete these SIDs from the Local Admins group and then run the KCM tool again.

After the ghost SIDs are removed, an interesting thing happens (besides the KCM working properly). When you open the log file again, you will see something different. Looking closer at the text of the log, this is the line of the greatest interest:

Error: Access of system information failed System.DirectoryServices.AccountManagement.PrincipalOperationException: An error (1332) occurred while enumerating the group membership. The member’s SID could not be resolved.

Clearly, if this message had been logged before the problem was fixed, I would have been able to fix the problem via a more direct path. It clearly states that there is a problem with a SID and that the SID could not be resolved. Why the tool needs to be able to resolve all SIDs escapes me, but it is what it is and we just roll with it for now.

Put a bow on it

This article showed a problem with one of my favorite tools, Kerberos Configuration Manager. This tool provides a great deal of power in helping to resolve various SPN-related problems with your SQL Server instances. Sadly, the error in this case is a bit of a pain to figure out because the log doesn’t populate properly when the error is thrown; rather, the log seems to populate after the error is resolved. The solution provided in this article is an easy fix and is consistent across multiple versions of Windows and SQL Server. Save yourself some headache up front: just delete those phantom SIDs from the local admin group on a routine basis. They shouldn’t be there anyway.

Interested in more back-to-basics articles? Check these out! Want to learn more about your security? Try this article on some of the many problems with lazy security in your environment (yes, akin to a user being a Domain Admin, sysadmin, and local admin). Here is another fantastic article discussing some of the persistent issues I have seen across the board at numerous clients for years and years. And as a prelude to an upcoming article in the 12 Days of Christmas series, here is a refresher on a topic I wrote about just last month. This is the first article in the 2020 “12 Days of Christmas” series. For the full list of articles, please visit this page.

The post How To Resolve User Error in Kerberos Configuration Manager first appeared on SQL RNNR.

Related Posts:
The Gift of the SPN December 10, 2019
CRM Data Source Connection Error January 23, 2020
Collation Conflict with Extended Events March 12, 2020
Configuration Manager is Corrupt December 17, 2019
Ad Hoc Queries Disabled December 20, 2019

The post How To Resolve User Error in Kerberos Configuration Manager appeared first on SQLServerCentral.

Google gives Artificial Intelligence full control over cooling its data centers

Sugandha Lahoti
20 Aug 2018
2 min read
Google, in collaboration with DeepMind, is giving an AI algorithm full control over cooling in several of its data centers. Since 2016, they have been using an AI-powered recommendation system (developed by Google and DeepMind) to improve the energy efficiency of Google’s data centers. This system made recommendations to data center managers, leading to energy savings of around 40 percent in those cooling systems. Now, Google is handing control over completely to the cloud-based AI system.

https://twitter.com/mustafasuleymn/status/1030442412861218817

How Google’s safety-first AI system works

Google’s previous AI engine required too much operator effort and supervision to implement the recommendations, so they explored a new system that could deliver similar energy savings without manual implementation. Here’s how the algorithm does it. A large number of sensors are embedded in the cooling plant. Every five minutes, the cloud-based AI system pulls a snapshot of the data center from these sensors and feeds it into deep neural networks, which predict how different combinations of potential actions will affect future energy consumption. The AI system then identifies which actions will minimize energy consumption while satisfying safety constraints. Those actions are sent back to the data center, where they are verified by the local control system and then implemented.

To ensure safety and reliability, the system uses eight different mechanisms to ensure it behaves as intended at all times and to improve energy savings. The system is already delivering consistent energy savings of around 30 percent on average, with further improvements expected. (Source: DeepMind Blog)

In the long term, Google wants to apply this technology in other industrial settings and help tackle climate change on an even grander scale. You can read more about their safety-first AI on DeepMind’s blog.

DeepMind Artificial Intelligence can spot over 50 sight-threatening eye diseases with expert accuracy
Why DeepMind made Sonnet open source
How Google’s DeepMind is creating images with artificial intelligence
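The control loop described in the article can be pictured with a small sketch. Everything below is hypothetical stand-in code rather than DeepMind's system; it only shows the shape of the cycle: snapshot the sensors, score candidate actions with a predictive model, filter out anything unsafe, and hand the best action to the local control system.

```python
import random
import time

# Hypothetical stand-ins for the real components described in the article.
def read_sensor_snapshot():
    """Pull the current sensor readings from the data center."""
    return {"outside_temp_c": 18.0 + random.random(), "load_kw": 900 + random.random() * 50}

def candidate_actions():
    """Enumerate cooling setpoints the controller may choose from."""
    return [{"chiller_setpoint_c": c} for c in (16, 17, 18, 19, 20)]

def predict_energy_use(snapshot, action):
    """Stand-in for the deep neural nets that predict future energy consumption."""
    return snapshot["load_kw"] * 0.1 + abs(action["chiller_setpoint_c"] - 18) * 2.0

def satisfies_safety_constraints(snapshot, action):
    """Reject actions outside the allowed operating envelope."""
    return 16 <= action["chiller_setpoint_c"] <= 20

def send_to_local_control_system(action):
    """The local control system verifies and implements the chosen action."""
    print("implementing", action)

while True:
    snapshot = read_sensor_snapshot()
    safe = [a for a in candidate_actions() if satisfies_safety_constraints(snapshot, a)]
    best = min(safe, key=lambda a: predict_energy_use(snapshot, a))
    send_to_local_control_system(best)
    time.sleep(5 * 60)  # a new snapshot every five minutes
```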

A new geometric deep learning extension library for PyTorch releases!

Sunith Shetty
19 Jun 2018
2 min read
PyTorch Geometric is a new geometric deep learning extension library for PyTorch. With this library, you can perform deep learning on graphs and other irregular structures using the variety of methods and features it offers. It also provides an easy-to-use mini-batch loader, helpful transforms for complex operations, and a large number of common datasets with simple interfaces. All of these features can be used on arbitrary graphs as well as on 3D meshes or point clouds.

The following methods are currently implemented in the library (each is described in the corresponding research paper):

SplineConv: spline-based CNNs for irregularly structured and geometric input such as graphs or meshes.
GCNConv: a scalable approach to semi-supervised learning on graph-structured data.
ChebConv: a generalized CNN model with fast localized spectral filtering on graphs.
NNConv: neural message passing for quantum chemistry.
GATConv: graph attention networks that operate on graph-structured data.
AGNNProp: attention-based graph neural networks for graph-based semi-supervised learning.
SAGEConv: representation learning on large graphs, achieving strong results on a variety of prediction tasks.
Graclus Pooling: weighted graph cuts without eigenvectors.
Voxel Grid Pooling

To learn more about installation, data handling mechanisms, and the full list of implemented methods and datasets, refer to the documentation. For simple hands-on practice, see the examples/ directory. The library is currently in its first alpha release; you can contribute to the project by raising an issue if you notice anything unexpected.

Read more
Can a production ready Pytorch 1.0 give TensorFlow a tough time?
Is Facebook-backed PyTorch better than Google’s TensorFlow?
Python, Tensorflow, Excel and more – Data professionals reveal their top tools
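As a rough illustration of how these building blocks are used (a minimal sketch, not taken from the release; the toy graph and layer sizes are invented), here is a two-layer network built from GCNConv and applied to a small three-node graph:

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# A toy graph: 3 nodes, 4 directed edges, 16 features per node.
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]], dtype=torch.long)
x = torch.randn(3, 16)
data = Data(x=x, edge_index=edge_index)

class GCN(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden_channels)
        self.conv2 = GCNConv(hidden_channels, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

model = GCN(in_channels=16, hidden_channels=32, num_classes=4)
out = model(data.x, data.edge_index)  # per-node class scores, shape [3, 4]
print(out.shape)
```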

A security issue in the net/http library of the Go language affects all versions and all components of Kubernetes

Savia Lobo
23 Aug 2019
3 min read
On August 19, the Kubernetes community disclosed that a security issue has been found in the net/http library of the Go language, affecting all versions and all components of Kubernetes. It can result in a DoS attack against any process with an HTTP or HTTPS listener. The two high-severity vulnerabilities, CVE-2019-9512 and CVE-2019-9514, have been assigned CVSS v3.0 base scores of 7.5 by the Kubernetes Product Security Committee. These vulnerabilities allow untrusted clients to allocate an unlimited amount of memory until the server crashes. The Kubernetes development team has released patched versions to address these security flaws and block potential attackers from exploiting them.

CVE-2019-9512 Ping Flood

In CVE-2019-9512, the attacker sends continual pings to an HTTP/2 peer, causing the peer to build an internal queue of responses. Depending on how efficiently this data is queued, this can consume excess CPU, memory, or both, potentially leading to a denial of service.

CVE-2019-9514 Reset Flood

In CVE-2019-9514, the attacker opens a number of streams and sends an invalid request over each stream that should solicit a stream of RST_STREAM frames from the peer. Depending on how the peer queues the RST_STREAM frames, this can consume excess memory, CPU, or both, potentially leading to a denial of service.

The Go team announced versions go1.12.8 and go1.11.13, following which the Kubernetes team released patch versions of Kubernetes built with the new versions of Go:

Kubernetes v1.15.3 - go1.12.9
Kubernetes v1.14.6 - go1.12.9
Kubernetes v1.13.10 - go1.11.13

On August 13, Netflix announced the discovery of multiple vulnerabilities that can affect server implementations of the HTTP/2 protocol. The streaming company issued eight CVEs in its security advisory, and two of these also impact Go and all Kubernetes components designed to serve HTTP/2 traffic (including /healthz).

The Azure Kubernetes Service community has recommended that customers upgrade to a patched release soon. “Customers running minor versions lower than the above (1.10, 1.11, 1.12) are also impacted and should also upgrade to one of the releases above to mitigate these CVEs”, the team suggests. To know more about this news in detail, read AKS Guidance and updates on GitHub.

Security flaws in Boeing 787 CIS/MS code can be misused by hackers, security researcher says at Black Hat 2019
CNCF-led open source Kubernetes security audit reveals 37 flaws in Kubernetes cluster; recommendations proposed
Cybersecurity researcher "Elliot Alderson" talks Trump and Facebook, Google and Huawei, and teaching kids online privacy [Podcast]
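As a practical illustration of the upgrade guidance above, here is a small sketch (not an official tool; it assumes kubectl is installed and pointed at your cluster) that compares the server version reported by kubectl against the patched releases listed in the announcement:

```python
import json
import re
import subprocess

# Minimum patched Kubernetes versions for CVE-2019-9512 / CVE-2019-9514,
# as listed in the announcement above.
PATCHED = {"1.13": 10, "1.14": 6, "1.15": 3}

def server_version():
    """Ask the cluster for its version via kubectl."""
    out = subprocess.run(
        ["kubectl", "version", "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    git_version = json.loads(out)["serverVersion"]["gitVersion"]  # e.g. "v1.14.5"
    major, minor, patch = re.match(r"v(\d+)\.(\d+)\.(\d+)", git_version).groups()
    return f"{major}.{minor}", int(patch)

if __name__ == "__main__":
    line, patch = server_version()
    needed = PATCHED.get(line)
    if needed is None:
        print(f"Kubernetes {line} is not in the patched series above; plan an upgrade.")
    elif patch >= needed:
        print(f"Kubernetes {line}.{patch} already includes the HTTP/2 fixes.")
    else:
        print(f"Kubernetes {line}.{patch} is vulnerable; upgrade to {line}.{needed} or later.")
```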

HAProxy shares how you can use stick tables for server persistence, threat detection, and collecting metrics

Bhagyashree R
24 Sep 2018
3 min read
Yesterday, HAProxy published an article discussing stick tables, its in-memory storage mechanism. Introduced in 2010, stick tables let you track client activity across requests, enable server persistence, and collect real-time metrics. They are supported in both the HAProxy Community and Enterprise Editions. You can think of a stick table as a type of key-value store: the key represents what you track across requests, such as a client IP, and the values are counters that, for the most part, HAProxy calculates for you.

What are the common use cases of stick tables?

Stack Exchange realized that, beyond their core function of server persistence, stick tables could also be used in many other scenarios. They sponsored further development, and stick tables have since become an incredibly powerful subsystem within HAProxy. Their main uses include:

Server persistence

Stick tables were originally introduced to solve the problem of server persistence. HTTP requests are stateless by design, because each request is executed independently, without any knowledge of the requests that were executed before it. A stick table can store a piece of information, such as an IP address, cookie, or range of bytes in the request body, and associate it with a server. The next time HAProxy sees new connections using the same piece of information, it forwards the request to the same server. In this way, stick tables help track user activity across requests and provide a mechanism for storing events and categorizing them by client IP or other keys.

Bot detection

Stick tables can be used to defend against certain types of bot threats, including request floods, login brute-force attacks, vulnerability scanners, web scrapers, slowloris attacks, and many more.

Collecting metrics

With stick tables, you can collect metrics to understand what is going on in HAProxy without enabling logging and having to parse the logs. In this scenario the Runtime API is used, which can read and analyze stick table data from the command line, a custom script, or an executable program. You can visualize this data using any dashboard of your choice, or use the fully loaded dashboard that comes with HAProxy Enterprise Edition for visualizing stick table data.

These are a few of the use cases where stick tables come in handy. To get a clearer understanding of stick tables and how they are used, check out the post by HAProxy.

Update: Earlier, this article said, "Yesterday (September 2018), HAProxy announced that they are introducing stick tables." This was incorrect, as pointed out by a reader; stick tables have been around since 2010. The article has been updated to reflect this.

Use App Metrics to analyze HTTP traffic, errors & network performance of a .NET Core app [Tutorial]
How to create a standard Java HTTP Client in ElasticSearch
Why is everyone going crazy over WebAssembly?
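To illustrate the metrics-collection approach described above, here is a minimal sketch of a script that queries the Runtime API over HAProxy's stats socket. The socket path and the stick table name are assumptions that depend entirely on your configuration (the socket must be enabled with a `stats socket` line in haproxy.cfg).

```python
import socket

HAPROXY_SOCKET = "/var/run/haproxy.sock"  # assumes `stats socket` is configured here
TABLE_NAME = "per_ip_rates"               # hypothetical stick table name

def runtime_api(command):
    """Send one Runtime API command over the stats socket and return the response."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(HAPROXY_SOCKET)
        sock.sendall(command.encode() + b"\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode()

if __name__ == "__main__":
    # Dump the contents of one stick table, e.g. per-client-IP request counters.
    print(runtime_api(f"show table {TABLE_NAME}"))
```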