
Tech News

3711 Articles

PyTorch 1.2 is here with a new TorchScript API, expanded ONNX export, and more

Bhagyashree R
12 Aug 2019
3 min read
Last week, the PyTorch team announced the release of PyTorch 1.2. This version comes with a new TorchScript API with improved Python language coverage, expanded ONNX export, a standard nn.Transformer module, and more. Here are some of the updates in PyTorch 1.2:

A new TorchScript API

TorchScript enables you to create models that are serializable and optimizable from PyTorch code. PyTorch 1.2 brings a new, easier-to-use TorchScript API for converting nn.Modules into ScriptModules. torch.jit.script now recursively compiles the functions, methods, and classes it encounters, and the preferred way to create a ScriptModule is torch.jit.script(nn_module_instance) rather than inheriting from torch.jit.ScriptModule. With this update, several items are deprecated and developers are advised not to use them in new code: the @torch.jit.script_method decorator, classes that inherit from torch.jit.ScriptModule, the torch.jit.Attribute wrapper class, and the __constants__ array.

TorchScript also has improved support for Python language constructs and the Python standard library. It supports iterator-based constructs such as for..in loops, zip(), and enumerate(), as well as the math and string libraries and other Python built-in functions.

Full support for ONNX Opset export

The PyTorch team has worked with Microsoft to bring full support for exporting ONNX Opset versions 7, 8, 9, and 10. PyTorch 1.2 can export dropout, slice, flip, and interpolate in Opset 10. ScriptModule has been improved to support multiple outputs, tensor factories, and tuples as inputs and outputs. Developers can also register their own symbolics to export custom ops, and set the dynamic dimensions of inputs during export.

A standard nn.Transformer

PyTorch 1.2 comes with a standard nn.Transformer module whose attributes can be modified as needed. Based on the paper "Attention Is All You Need", this module relies entirely on an attention mechanism to draw global dependencies between input and output. It is designed so that its individual components can be used independently; for instance, you can use the nn.TransformerEncoder API without the larger nn.Transformer.

Breaking changes in PyTorch 1.2

The return dtype of comparison operations including lt, le, gt, ge, eq, and ne is now torch.bool instead of torch.uint8, and torch.tensor(bool) and torch.as_tensor(bool) now produce torch.bool tensors rather than torch.uint8. Some linear algebra functions have been removed in favor of renamed operations; the PyTorch release notes include a table listing all the removed operations and their replacements.

Check out the PyTorch release notes for more details.

PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility
Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows
Facebook open-sources PyText, a PyTorch based NLP modeling framework
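As a minimal, hedged sketch of the recursive scripting workflow described above (the module, shapes, and control flow are illustrative, not taken from the announcement):

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        # Data-dependent Python control flow is preserved by the compiler.
        if x.sum() > 0:
            return self.fc(x)
        return self.fc(-x)

# PyTorch 1.2 style: script an instance directly instead of
# inheriting from torch.jit.ScriptModule.
scripted = torch.jit.script(TinyNet())
print(scripted(torch.randn(3, 4)))
scripted.save("tiny_net.pt")  # serialized; loadable without the Python source
```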


Mozilla introduces Neqo, Rust implementation for QUIC, new http protocol

Fatema Patrawala
24 Sep 2019
3 min read
Two months ago, Mozilla introduced Neqo, a Rust implementation of QUIC, a new protocol for the web built on top of UDP instead of TCP. As per the GitHub page, developers who want to test HTTP 0.9 programs using neqo-client and neqo-server can run:

```
cargo build
./target/debug/neqo-server 12345 -k key --db ./test-fixture/db
./target/debug/neqo-client http://127.0.0.1:12345/ -o --db ./test-fixture/db
```

Developers who want to test HTTP/3 programs using neqo-client and neqo-http3-server can run:

```
cargo build
./target/debug/neqo-http3-server [::]:12345 --db ./test-fixture/db
./target/debug/neqo-client http://127.0.0.1:12345/ --db ./test-fixture/db
```

What is QUIC and why is it important for web developers

According to Wikipedia, QUIC is a next-generation, encrypted-by-default transport layer network protocol designed by Jim Roskind at Google to secure and accelerate web traffic on the Internet. It was implemented and deployed in 2012, and publicly announced in 2013 as experimentation broadened and it was described to the IETF. While still an Internet Draft, QUIC is used by more than half of all connections from the Chrome web browser to Google's servers. As per QUIC's official website, "QUIC is an IETF Working Group that is chartered to deliver the next transport protocol for the Internet."

One user on Hacker News commented, "QUIC is an entirely new protocol for the web developed on top of UDP instead of TCP. UDP has the advantage that it is not dependent on the order of the received packets, hence non-blocking unlike TCP. If QUIC is used, the TCP/TLS/HTTP2 stack is replaced by a UDP/QUIC stack." The user further commented, "If QUIC features prove effective, those features could migrate into a later version of TCP and TLS (which have a notably longer deployment cycle). So basically, QUIC wants to combine the speed of the UDP protocol with the reliability of the TCP protocol."

Additionally, the Rust community on Reddit was asked whether QUIC is royalty-free. One Rust developer responded, "Yes, it is being developed and standardized by a working group under the IETF. So it will become an internet standard just like UDP, TCP, HTTP, etc."

If you are interested in knowing more about Neqo and QUIC, check out the official GitHub page.

Other interesting news in web development:
Chrome 78 beta brings the CSS Properties and Values API, the native file system API, and more!
Apple releases Safari 13 with opt-in dark mode support, FIDO2-compliant USB security keys support, and more!
Inkscape 1.0 beta is available for testing


Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+, available now at $25

Prasad Ramesh
16 Nov 2018
2 min read
Yesterday, Raspberry Pi launched the Raspberry Pi 3 Model A+ board, a smaller and cheaper version of the Raspberry Pi 3B+. In 2014, the first-generation Raspberry Pi 1 Model B+ was followed by a lighter Model A+ with half the RAM and fewer ports, which was able to fit the Hardware Attached on Top (HAT) footprint. Until now there were no such small-form-factor boards for the Raspberry Pi 2 and 3.

Size is cut down, but (most of) the features are not

The Raspberry Pi 3 Model A+ retains most of the features and enhancements of the bigger board in this series: a 1.4GHz 64-bit quad-core ARM Cortex-A53 CPU, 512MB LPDDR2 SDRAM, and dual-band 802.11ac wireless LAN and Bluetooth 4.2/BLE. It also keeps the improved USB mass-storage booting and improved thermal management. The entire Raspberry Pi 3 Model A+ board is an FCC-certified radio module, which will significantly reduce conformance-testing costs for Raspberry Pi–based products. What is shrunk is the price, now down to $25, and the board size, 65x56mm (the size of a HAT).

Raspberry Pi 3 Model A+ will likely be the last product for now

In March this year, Raspberry Pi said that the 3+ platform is the final iteration of the "classic" Raspberry Pi boards. The next products will come out of necessity rather than evolution, because a true evolution would require new core silicon, on a new process node, with new memory technology. So this new board, the 3A+, is about closing things out; we won't see any more products in this line in the foreseeable future. The board does answer one of the most frequent customer requests for "missing products", and it clears the pipeline to focus on building the next generation of Raspberry Pi boards.

For more details visit the Raspberry Pi website.

Introducing Raspberry Pi TV HAT, a new addon that lets you stream live TV
Tensorflow 1.9 now officially supports Raspberry Pi bringing machine learning to DIY enthusiasts
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?


PHP 7.4 releases with type declarations, shorthand syntax in Arrow functions, and more!

Vincy Davis
29 Nov 2019
2 min read
Yesterday, the PHP development team announced the availability of PHP version 7.4, the fourth feature update to the PHP 7 series. PHP 7.4 comes with numerous improvements and new features.

Key features in PHP 7.4

- Class properties now support type declarations.
- Arrow functions provide a shorthand syntax for defining functions with implicit by-value scope binding.
- Full variance support is only available if autoloading is used; within a single file, only non-cyclic type references are supported.
- Numeric literals can contain underscores between digits.
- Weak references allow programmers to retain a reference to an object without preventing the object from being destroyed.
- Users can now throw exceptions from __toString(). This was previously not permitted in PHP, as it used to result in a fatal error.
- CURLFile now supports stream wrappers in addition to plain file names.
- The FILTER_VALIDATE_FLOAT filter supports the min_range and max_range options, with the same semantics as FILTER_VALIDATE_INT.
- A new FFI extension provides a simple way to call native functions, access native variables, and create/access data structures defined in C libraries.
- A new IMG_FILTER_SCATTER image filter applies a scatter effect to images.

Read More: The Union Types 2.0 proposal gets a go-ahead for PHP 8.0

Users are happy with the new features in the PHP 7.4 release. To see the full list of changes, head over to the PHP archive page. Users can also check out the PHP manual to learn how to migrate from PHP 7.3.x to PHP 7.4.x.

PEAR's (PHP Extension and Application Repository) web server disabled due to a security breach
Symfony leaves PHP-FIG, the framework interoperability group
Google App Engine standard environment (beta) now includes PHP 7.2
Redox OS will soon permanently run rustc, the compiler for the Rust programming language, says Redox creator Jeremy Soller
Homebrew 2.2 releases with support for macOS Catalina


Kali Linux 2018.1 released

Savia Lobo
04 Apr 2018
2 min read
Kali Linux 2018.1, the first Kali Linux release of the year, is now available. It contains all the updates and bug fixes since version 2017.3, released in November 2017. The 2018.1 release is boosted by the new Linux 4.14.12 kernel, which brings added support for newer hardware and improved performance, letting ethical hackers and penetration testers use Kali more efficiently to enhance security.

The release also has two exceptional features:

- AMD Secure Memory Encryption: a new feature in AMD processors that enables automatic encryption and decryption of DRAM. With it, systems should no longer be vulnerable to cold-boot attacks because, even with physical access, the memory will not be readable.
- Increased memory limits: support for 5-level paging, a feature of upcoming processors that will support 4 PB (petabytes) of physical memory and 128 PB of virtual memory.

Several packages, including zaproxy, secure-socket-funneling, pixiewps, seclists, burpsuite, dbeaver, and reaver, have been updated in Kali 2018.1. Also, for those using Hyper-V to run the Kali virtual machines provided by Offensive Security, the Hyper-V virtual machine is now generation 2: it is UEFI-based and supports expanding/shrinking the HDD. Generation 2 also includes Hyper-V integration services, which support Dynamic Memory, Network Monitoring/Scaling, and Replication.

Know more about Kali's latest release on the Kali Linux Blog.


Tesseract version 4.0 releases with new LSTM based engine, and an updated build system

Natasha Mathur
30 Oct 2018
2 min read
Google released version 4.0 of its OCR engine, Tesseract, yesterday. Tesseract 4.0 comes with a new neural net (LSTM) based OCR engine, an updated build system, other improvements, and bug fixes. Tesseract supports Unicode and can recognize more than 100 languages out of the box. It can be trained to recognize other languages and is used for text detection on mobile devices, in videos, and in Gmail image spam detection. Let's have a look at what's new in Tesseract 4.0.

New neural net (LSTM) based OCR engine

The new OCR engine uses a neural network system based on LSTMs, with major accuracy gains. It comes with new training tools for the LSTM engine: you can train a new model from scratch or fine-tune an existing model. Trained data, including LSTM models for 123 languages, has been added to the new engine, and optional accelerated code paths have been added for the LSTM recognizer. Moreover, a new parameter, lstm_choice_mode, allows including alternative symbol choices in the hOCR output.

Updated Build System

Tesseract 4.0 uses semantic versioning and requires Leptonica 1.74.0 or higher. If you want to build Tesseract from source code, a compiler with strong C++11 support is necessary. Unit tests have been added to the main repo, Tesseract's source tree has been reorganized, and a new option lets you compile Tesseract without the code of the legacy OCR engine.

Bug Fixes

Issues in training-data rendering have been fixed, as has damage caused to binary images when processing PDFs. Issues in the OpenCL code have been fixed; OpenCL now works fine for the legacy Tesseract OCR engine, but the performance hasn't improved yet.

Other Improvements

Multi-page TIFF handling is improved in Tesseract 4.0, and improvements have been made to PDF rendering. Version information and improved help texts have been added to the training tools. tessedit_pageseg_mode 1 has been removed from the hocr, pdf, and tsv config files; the user now has to explicitly pass --psm 1 if that behavior is desired.

For more information, check out the official release notes.

Tesla v9 to incorporate neural networks for autopilot
Neural Network Intelligence: Microsoft's open source automated machine learning toolkit
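As a hedged illustration of driving the new LSTM engine from Python, here is a minimal sketch using the third-party pytesseract wrapper (the file name is illustrative; --oem 1 selects the LSTM engine on a Tesseract 4.x install):

```python
from PIL import Image
import pytesseract  # pip install pytesseract; requires a Tesseract 4.x binary

# --oem 1 selects the LSTM-based engine; --psm 3 is fully automatic
# page segmentation (the default).
text = pytesseract.image_to_string(
    Image.open("scan.png"), config="--oem 1 --psm 3"
)
print(text)
```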

.NET Core 3.0 and .NET Framework 4.8: more details announced

Prasad Ramesh
05 Oct 2018
4 min read
.NET Core 3.0 was announced in May this year; it adds support for building desktop applications using WinForms, WPF, and Entity Framework 6. Updates to the .NET Framework were also announced, enabling the use of new modern controls from UWP in existing WinForms and WPF applications. Now, more details are out on both of them.

.NET Core 3.0

.NET Core 3.0 addresses three scenarios asked for by the .NET Framework developer community.

Multiple versions of .NET on the same machine: As of now, only one version of the .NET Framework can be installed on a machine, so an update to the .NET Framework poses the risk that a security fix, bug fix, or new API breaks applications on the machine. Microsoft aims to solve this problem by allowing multiple versions of .NET Core to reside on one machine. Applications that need to be stable can be locked to one of the stable versions and later moved to a newer version when it is ready.

Embedding .NET directly into an application: Since there can only be one version of the .NET Framework on a machine, taking advantage of the latest framework or language features requires installing the newer version. With .NET Core, you can ship the framework as part of an application, which lets developers use the new features of the latest version without waiting for the framework to be installed.

Taking advantage of .NET Core features: The side-by-side nature of .NET Core enables the introduction of new innovative APIs and Base Class Library (BCL) improvements without the risk of breaking compatibility. WinForms and WPF applications on Windows can now take advantage of the latest .NET Core features, including more fundamental fixes for better high-DPI support.

.NET Framework 4.8

.NET Framework 4.8 also addresses three scenarios asked for by the .NET Framework developer community.

Modern browser and media controls: .NET desktop applications use Internet Explorer and Windows Media Player for displaying HTML and playing media files. These legacy controls don't show the latest HTML or play the latest media files, so Microsoft is adding new controls that take advantage of Microsoft Edge and newer media players, thereby supporting the latest standards.

Access to touch and UWP controls: The Universal Windows Platform (UWP) contains new controls that take advantage of the latest Windows features and devices with touch displays. Application code does not have to be rewritten to use these new features and controls; Microsoft is making them available to WinForms and WPF so developers can take advantage of them in existing code.

Improvements for high DPI: The standard resolution of computer displays is steadily moving to 4K, and even 8K resolutions are now available. With the newer versions, WinForms and WPF applications will look great on these high-resolution displays.

The future of .NET

The .NET Framework is installed on over one billion machines, so even a security fix that introduces a bug affects a lot of devices. .NET Core is the fast-moving version of .NET; because of its side-by-side nature, it can take changes that would be too risky in the .NET Framework. This means .NET Core is bound to get new APIs and language features over time that the .NET Framework cannot. If your existing applications are on the .NET Framework, there is no immediate need to move to .NET Core.

For more details, visit the Microsoft Blog.
.NET Core 2.0 reaches end of life, no longer supported by Microsoft
.NET announcements: Preview 2 of .NET Core 2.2 and Entity Framework Core 2.2, C# 7.3, and ML.NET 0.5
Microsoft's .NET Core 2.1 now powers Bing.com


Introducing ESPRESSO, an open-source, PyTorch based, end-to-end neural automatic speech recognition (ASR) toolkit for distributed training across GPUs

Amrata Joshi
24 Sep 2019
5 min read
Last week, researchers from the USA and China released a paper titled "ESPRESSO: A fast end-to-end neural speech recognition toolkit". In the paper, the researchers introduce ESPRESSO, an open-source, modular, end-to-end neural automatic speech recognition (ASR) toolkit built on the PyTorch library and FAIRSEQ, the neural machine translation toolkit. The toolkit supports distributed training across GPUs and computing nodes, as well as decoding approaches commonly employed in ASR, such as look-ahead word-based language model fusion. ESPRESSO is 4 to 11 times faster at decoding than similar systems like ESPnet, and it achieves state-of-the-art ASR performance on datasets such as LibriSpeech, WSJ, and Switchboard.

Limitations of ESPnet

ESPnet, an end-to-end speech processing toolkit, has some limitations:

- Its code is not easily extensible and has portability issues due to its mixed dependency on the PyTorch and Chainer deep learning frameworks.
- Its decoder is based on a slow beam search algorithm that is not fast enough for quick turnaround of experiments.

To address these problems, the researchers introduced ESPRESSO. With ESPRESSO, it is possible to plug new modules into the system by extending standard PyTorch interfaces. The research paper reads, "We envision that ESPRESSO could become the foundation for unified speech + text processing systems, and pave the way for future end-to-end speech translation (ST) and text-to-speech synthesis (TTS) systems, ultimately facilitating greater synergy between the ASR and NLP research communities."

ESPRESSO's design goals

The researchers implemented ESPRESSO with certain design goals in mind. First, they used pure Python / PyTorch to enable modularity and extensibility. To speed up experiments, they implemented parallelization, distributed training, and decoding. They kept compatibility with the Kaldi / ESPnet data format in order to reuse previous, proven data preparation pipelines, and they made ESPRESSO interoperable with the existing FAIRSEQ codebase to ease future joint research between speech and NLP.

ESPRESSO's dataset classes

The speech data for ESPRESSO follows the format of Kaldi, a speech recognition toolkit in which utterances are stored in the Kaldi-defined SCP format. Following ESPnet, the researchers use the 80-dimensional log Mel feature along with additional pitch features (83 dimensions per frame). ESPRESSO also follows FAIRSEQ's concept of "datasets", which contain sets of training samples and abstractions. Based on this concept, the researchers created the following dataset classes in ESPRESSO:

- data.ScpCachedDataset: contains the real-valued acoustic features extracted from the speech utterances. A training batch drawn from this dataset is a real-valued tensor of shape [BatchSize × TimeFrameLength × FeatureDims] that is fed to the neural speech encoder. Because the acoustic features are large and cannot be loaded into memory all at once, the class implements sharded loading, where a bulk of features is pre-loaded once the previous bulk is consumed for training/decoding. This balances the file system's I/O load and memory usage (a minimal sketch of this loading pattern appears at the end of this piece).
- data.TokenTextDataset: contains the gold speech transcripts as text, where the training batches are integer-valued tensors of shape [BatchSize × SequenceLength].
- data.SpeechDataset: a container for the above datasets. Samples drawn from this dataset contain two fields, source and target, pointing to the speech utterance and the gold transcript respectively.

Achieving state-of-the-art ASR performance on LibriSpeech, WSJ, and Switchboard datasets

ESPRESSO provides running recipes for a variety of datasets. The researchers give details of their recipes on the Wall Street Journal (WSJ), an 80-hour English newspaper speech corpus; Switchboard (SWBD), a 300-hour English telephone speech corpus; and LibriSpeech, a corpus of approximately 1,000 hours of English speech. Each dataset has its own extra text corpus that is used for training language models. The models are optimized using Adam, a method for stochastic optimization, with an initial learning rate of 10⁻³. This rate is halved if the metric on the validation set at the end of an epoch does not improve over the previous epoch, and training stops once the learning rate drops below 10⁻⁵. Curriculum learning is used for LibriSpeech and WSJ / SWBD, as it prevents training divergence and improves performance. NVIDIA GeForce GTX 1080 Ti GPUs were used for training and evaluating the models, and all models in the paper were trained on 2 GPUs using FAIRSEQ's built-in distributed data parallelism.

To conclude, the researchers present the ESPRESSO toolkit and provide ASR recipes for the LibriSpeech, WSJ, and Switchboard datasets. The paper reads, "By sharing the underlying infrastructure with FAIRSEQ, we hope ESPRESSO will facilitate future joint research in speech and natural language processing, especially in sequence transduction tasks such as speech translation and speech synthesis." To know more about ESPRESSO in detail, check out the paper.

Other interesting news in programming:
Nim 1.0 releases with improved library, backward compatibility and more
Dgraph releases Ristretto, a fast, concurrent and memory-bound Go cache library
.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3
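As referenced above, here is a minimal sketch of the sharded-loading idea behind data.ScpCachedDataset. This is illustrative PyTorch, not ESPRESSO's actual implementation; the shard file format, class name, and per-shard layout are assumptions for the example:

```python
import torch
from torch.utils.data import Dataset

class ShardedFeatureDataset(Dataset):
    """Loads acoustic features one shard at a time instead of all at once."""

    def __init__(self, shard_paths, shard_size):
        self.shard_paths = shard_paths  # e.g. one saved tensor per shard
        self.shard_size = shard_size    # utterances per shard
        self._cache_idx = None
        self._cache = None

    def __len__(self):
        return len(self.shard_paths) * self.shard_size

    def __getitem__(self, i):
        shard, offset = divmod(i, self.shard_size)
        if shard != self._cache_idx:
            # Load a new shard only when the previous one is exhausted,
            # bounding memory use and batching file-system I/O.
            self._cache = torch.load(self.shard_paths[shard])
            self._cache_idx = shard
        return self._cache[offset]  # shape: [TimeFrameLength, FeatureDims]
```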


DeepMind AI’s AlphaStar achieves Grandmaster level in StarCraft II with 99.8% efficiency

Vincy Davis
04 Nov 2019
5 min read
Earlier this year in January, Google DeepMind's AI AlphaStar defeated two professional players, TLO and MaNa, at StarCraft II, a real-time strategy game. Two days ago, DeepMind announced that AlphaStar has now achieved the highest possible online competitive ranking in StarCraft II, called Grandmaster level. This makes AlphaStar the first AI to reach the top league of a widely popular game without any restrictions. AlphaStar used multi-agent reinforcement learning and rated above 99.8% of officially ranked human players, achieving Grandmaster level for all three StarCraft II races: Protoss, Terran, and Zerg. The DeepMind researchers have published the details in the paper "Grandmaster level in StarCraft II using multi-agent reinforcement learning".

How did AlphaStar achieve the Grandmaster level in StarCraft II?

The DeepMind researchers developed a robust and flexible agent by understanding the potential and limitations of open-ended learning, which helped them make AlphaStar cope with complex real-world domains. "Games like StarCraft are an excellent training ground to advance these approaches, as players must use limited information to make dynamic and difficult decisions that have ramifications on multiple levels and timescales," states the blog post.

StarCraft II requires players to balance high-level economic decisions with individual control of hundreds of units. Human players operate under physical constraints that limit their reaction time and rate of actions, so AlphaStar was subjected to the same kinds of constraints, including delays due to network latency and computation time. Its actions per minute (APM) were capped, with peak statistics kept substantially lower than those of humans. To align with standard human play, it could view only a portion of the map at a time, could register only a limited number of mouse clicks, and had only 22 non-duplicated actions to play every five seconds.

AlphaStar uses a combination of general-purpose techniques: neural network architectures, imitation learning, reinforcement learning, and multi-agent learning. Games were sampled from a publicly available dataset of anonymized human replays, on which the system was trained to predict the action of every player. These predictions were then used to procure a diverse set of strategies reflecting the different modes of human play.

Read More: DeepMind's AlphaStar AI agent will soon anonymously play with European StarCraft II players

Dario "TLO" Wünsch, a professional StarCraft II player, says, "I've found AlphaStar's gameplay incredibly impressive – the system is very skilled at assessing its strategic position, and knows exactly when to engage or disengage with its opponent. And while AlphaStar has excellent and precise control, it doesn't feel superhuman – certainly not on a level that a human couldn't theoretically achieve. Overall, it feels very fair – like it is playing a 'real' game of StarCraft."

According to the paper, AlphaStar had about 10²⁶ possible actions available at each time step, and it had to make thousands of actions before learning whether it had won or lost a game. One of the key strategies behind AlphaStar's performance was learning human strategies; this was necessary to ensure that the agents kept exploring those strategies throughout self-play. The researchers say, "To do this, we used imitation learning – combined with advanced neural network architectures and techniques used for language modeling – to create an initial policy which played the game better than 84% of active players."

AlphaStar also uses a latent variable to encode the distribution of opening moves from human games, which helped it preserve high-level strategies and represent many strategies within a single neural network. By combining the advances in imitation learning, reinforcement learning, and the League, the researchers trained AlphaStar Final, the agent that reached Grandmaster level at the full game of StarCraft II without any modifications. AlphaStar used a camera interface, which gave it exactly the information a human player would receive, and all the interfaces and restrictions it faced were approved by a professional player.

Finally, the results indicate that general-purpose learning techniques can scale AI systems to work in complex, dynamic environments involving multiple actors. AlphaStar's great feat has many people excited about the future of AI.

Interested readers can read the research paper to check AlphaStar's performance. Head over to DeepMind's blog for more details.

Google AI introduces Snap, a microkernel approach to 'Host Networking'
Are we entering the quantum computing era? Google's Sycamore achieves 'quantum supremacy' while IBM refutes the claim
Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs
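The APM cap described above is essentially a rate limit on the agent's action stream. As a toy illustration only (this is not DeepMind's code; the cap value and the token-bucket design are assumptions for the example):

```python
import time

class ApmLimiter:
    """Token bucket that caps actions per minute, refilling continuously."""

    def __init__(self, max_apm):
        self.rate = max_apm / 60.0        # tokens added per second
        self.capacity = float(max_apm)
        self.tokens = float(max_apm)
        self.last = time.monotonic()

    def try_act(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True   # action allowed this tick
        return False      # over budget: the agent must issue a no-op

limiter = ApmLimiter(max_apm=300)  # illustrative cap
print(limiter.try_act())
```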


Alibaba’s chipmaker launches open source RISC-V based ‘XuanTie 910 processor’ for 5G, AI, IoT and self-driving applications

Vincy Davis
26 Jul 2019
4 min read
Launched in 2018, Alibaba's chip subsidiary Pingtouge made a major announcement yesterday: it is launching its first product, the XuanTie 910 processor, built on the open-source RISC-V instruction set architecture. The XuanTie 910 is expected to reduce the costs of related chip production by more than 50%, reports Caixin Global. XuanTie 910, also known as T-Head, will soon be available in the market for commercial use. Pingtouge will also release some of XuanTie 910's code on GitHub for free to help the global developer community create innovative applications; no release dates have been revealed yet.

What are the properties of the XuanTie 910 processor?

The XuanTie 910 is a 16-core processor that achieves 7.1 CoreMark/MHz, with a main frequency of up to 2.5GHz. It can be used to manufacture high-end edge-based microcontrollers (MCUs), CPUs, and systems-on-chip (SoCs), for applications like 5G telecommunication, artificial intelligence (AI), and autonomous driving. The processor delivers a 40% performance increase over mainstream RISC-V implementations and extends the instruction set by roughly 20%. According to Synced, XuanTie 910 has two unconventional properties: it is a 2-stage pipelined, out-of-order, triple-issue processor with two memory accesses per cycle, and its computing, storage, and multi-core capabilities are superior due to an extended instruction set; XuanTie 910 adds more than 50 instructions beyond base RISC-V.

Last month, The Verge reported that an internal ARM memo instructed its staff to stop working with Huawei. With the US blacklisting China's telecom giant Huawei and banning any American company from doing business with it, it seems that ARM is also following the American strategy: although ARM is based in the U.K. and owned by Japan's SoftBank Group, it does have "US origin technology", as claimed in the internal memo. This may be one of the reasons why Alibaba is increasing its efforts in developing RISC-V, so that Chinese tech companies can become independent of Western technologies. A XuanTie 910 processor can assure Chinese companies of a stable future, with no fear of it being banned by Western governments. Besides being cost-effective, RISC-V has other advantages, such as more flexibility compared to ARM. With complex license policies and higher power consumption, it is going to be a challenge for ARM to compete against RISC-V and MIPS (Microprocessor without Interlocked Pipeline Stages) processors.

A Hacker News user comments, "I feel like we (USA) are forcing China on a path that will make them more competitive long term." Another user says, "China is going to be key here. It's not just a normal market - China may see this as essential to its ability to develop its technology. It's Made in China 2025 policy. That's taken on new urgency as the west has started cutting China off from western tech - so it may be normal companies wanting some insurance in case intel / arm cut them off (trade disputes etc) AND the govt itself wanting to product its industrial base from cutoff during trade disputes."

Some users also feel that technology wins when two big economies keep producing innovative technologies. A comment on Hacker News reads, "Good to see development from any country. Obviously they have enough reason to do it. Just consider sanctions. They also have to protect their own market. Anyone that can afford it, should do it. Ultimately it is a good thing from technology perspective."

Not all US tech companies are wary of partnering with Chinese counterparts. Two days ago, Salesforce, the American cloud-based software company, announced a strategic partnership with Alibaba that aims to help Salesforce localize its products in mainland China, Hong Kong, Macau, and Taiwan. This will enable Salesforce customers to market, sell, and operate through services like Alibaba Cloud and Tmall.

Winnti Malware: Chinese hacker group attacks major German corporations for years, German public media investigation reveals
The US Justice Department opens a broad antitrust review case against tech giants
Salesforce is buying Tableau in a $15.7 billion all-stock deal

Angular CLI 8.3.0 releases with a new deploy command, faster production builds, and more

Bhagyashree R
26 Aug 2019
3 min read
Last week, the Angular team announced the release of Angular CLI 8.3.0. Along with a redesigned website, this release comes with a new deploy command and improves the previously introduced differential loading.

Key updates in Angular CLI 8.3.0

Deploy directly from the CLI to a cloud platform with the new deploy command

Starting from Angular CLI 8.3.0, you have a new deploy command that executes the deploy CLI builder associated with your project. It is essentially a simple alias for ng run MY_PROJECT:deploy. There are many third-party builders that implement deployment capabilities for different platforms; you can add one to your project with ng add [package name]. After a package with deployment capability is added, your project's angular.json file is automatically updated with a deploy section, and you can then deploy the project by executing the ng deploy command. Currently, the deploy command supports deployment to Firebase, Azure, Zeit, Netlify, and GitHub. You can also create a builder yourself to use ng deploy when deploying to a self-managed server or when there is no builder for the cloud platform you are using.

Improved differential loading

Angular CLI 8.0 introduced the concept of differential loading to maximize the browser compatibility of your web application. Most modern browsers support ES2015, but some of your app's users might be on a browser that doesn't. To target a wide range of browsers, you can use polyfill scripts and ship a single bundle containing all your compiled code and any polyfills that may be needed; however, this increased bundle size shouldn't penalize users who have modern browsers. This is where differential loading comes in: the CLI builds two separate bundles as part of your deployed application, the first targeting modern browsers and the second targeting legacy browsers with all the necessary polyfills. Though this increases your application's browser compatibility, the production build used to take twice the time. Angular CLI 8.3.0 fixes this by changing how the command runs: the build targeting ES2015 is produced first and then directly down-leveled to ES5, instead of rebuilding the app from scratch. If you encounter any issue, you can fall back to the previous behavior with NG_BUILD_DIFFERENTIAL_FULL=true ng build --prod.

Many Angular developers are excited about the new updates in Angular CLI 8.3.0, while some did question the usefulness of the deploy command. A developer on Reddit shared their perspective: "Honestly, I think Angular and the CLI are already big and complex enough. Every feature possibly creates bugs and needs to be maintained. While the CLI is incredibly useful and powerful there have been also many issues in the past. On the other hand, I must admit that I can't judge the usefulness of this feature: I've never used Firebase. Is it really so hard to deploy on it? Can't this be done with a couple of lines of a shell script? As already said: One should use CI/CD anyway."

To know more about the new features in Angular CLI 8.3.0, check out the official docs. Also, check out the @angular-schule/ngx-deploy-starter repository to create a new builder for utilizing the deploy command.

Angular 8.0 releases with major updates to framework, Angular Material, and the CLI
Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular
The Angular 7.2.1 CLI release fixes a webpack-dev-server vulnerability, supports TypeScript 3.2 and Angular 7.2.0-rc.0


Azure Functions 3.0 released with support for .NET Core 3.1!

Savia Lobo
12 Dec 2019
2 min read
On 9th December, Microsoft announced that the go-live release of Azure Functions 3.0 is now available. Among the many new capabilities in this release, one notable addition is support for the newly released .NET Core 3.1, an LTS (long-term support) release, and Node 12. Users can build and deploy 3.0 functions in production, and Azure Functions 3.0 brings the ability to target .NET Core 3.1 and Node 12 along with high backward compatibility for existing apps running on older language versions, without any code changes.

"While the runtime is now ready for production, and most of the tooling and performance optimizations are rolling out soon, there are still some tooling improvements to come before we announce Functions 3.0 as the default for new apps. We plan to announce Functions 3.0 as the default version for new apps in January 2020," the official announcement mentions.

Users running on earlier versions of Azure Functions will continue to be supported, and the company does not plan to deprecate 1.0 or 2.0 at present. "Customers running Azure Functions targeting 1.0 or 2.0 will also continue to receive security updates and patches moving forward—to both the Azure Functions runtime and the underlying .NET runtime—for apps running in Azure. Whenever there's a major version deprecation, we plan to provide notice at least a year in advance for users to migrate their apps to a newer version," Microsoft mentions.

To know more about this in detail, read Azure Functions' official documentation.

Creating triggers in Azure Functions [Tutorial]
Azure Functions 2.0 launches with better workload support for serverless
Serverless computing wars: AWS Lambdas vs Azure Functions


How Netflix uses AVA, an Image Discovery tool to find the perfect title image for each of its shows

Melisha Dsouza
04 Sep 2018
5 min read
Netflix, the video-on-demand streaming company, has seen a surge in its number of users as well as in the viewership of its TV shows, and it is constantly striving to provide an enriching experience to its viewers. To keep pace with ever-increasing user-experience demands, Netflix is introducing a collection of tools and algorithms to make its content more audience-relevant. AVA (Aesthetic Visual Analysis) analyses large volumes of images obtained from the video frames of a TV show to select the title image for that show. Netflix understands that a visually appealing title image plays an incredibly important role in helping a viewer find new shows and movies to watch.

How title images are selected normally

Usually, content editors have to go through tens of thousands of video frames of a show to select a good title image. To give you a sense of the effort required: a single one-hour episode of Stranger Things consists of nearly 86,000 static video frames. Imagine sifting through each of these frames painstakingly to find the perfect title image, one that not only connects with viewers but also gives them the gist of the storyline. On top of that, the number of frames can reach a million depending on the number of episodes in a show. Manually screening the frames is labor-intensive to the point of being impractical, and the editors choosing the stills require in-depth expertise in the source content the images are intended to represent. Considering Netflix has an exponentially growing catalog of shows, surfacing meaningful images from videos is a very challenging expectation for the editors. Enter AVA, using its image classification algorithms to sort the right image at the right time.

What is AVA?

The ever-growing number of images on the internet has led to challenges in processing and classifying them. To address this concern, a research team from the University of Barcelona, Spain, in collaboration with Xerox Corporation, developed a method called Aesthetic Visual Analysis (AVA) as a research project. The project contains a vast database of over 250,000 images combined with metadata such as aesthetic scores, semantic labels for more than 60 image categories, and many other characteristics. Using statistical concepts like standard deviation, mean score, and variance, AVA rates images; based on the distributions computed from these statistics, the researchers assess the semantic challenges and choose the right images for the database. AVA primarily alleviates the burden of extensive benchmarking and trains more images, enables images to get a better aesthetic appeal, and allows computing performance to be significantly optimized to reduce the impact on hardware. You can get more insights by reading the research paper.

The AVA approach used at Netflix

The process takes place in three steps:

- AVA starts by analysing images obtained through frame annotation. This involves processing and annotating many different variables on every individual frame of video to derive what the frame contains and to understand its importance to the story. To keep pace with its growing catalog of content, Netflix uses the Archer framework, which splits a video into very small chunks to aid parallel video processing.
- After the frames are obtained, they are run through a series of image recognition algorithms to build metadata.
- Metadata is further classified as visual, contextual, and composition metadata (a toy sketch of the visual measures appears at the end of this piece). Visual metadata covers brightness, sharpness, and color. Contextual metadata is a combination of elements that derive meaning from the actions or movement of the actors, objects, and camera in the frame, e.g. face detection, motion estimation, object detection, and camera shot identification. Composition metadata covers intricate image details based on core principles of photography, cinematography, and visual aesthetic design, such as depth of field and symmetry.

Choosing the right picture

The "best" image is chosen considering three important aspects: the lead actors, visual range, and sensitivity filters. Emphasis goes first to the lead actors of the show, since they make a visual impact. To identify the key character of a given episode, AVA utilizes a combination of face clustering and actor recognition to separate main characters from secondary characters and extras. Next comes the diversity of the images present in the video frames, which covers camera positions and image details such as brightness, color, and contrast. Keeping these in mind, image frames can be grouped by similarity, which helps in developing image support vectors. The vectors primarily assist in designing an image diversity index, in which all the relevant images collected for an episode or even a movie can be scored by visual appeal. Sensitive content such as violence, nudity, and advertisements is filtered and given low priority in the image vectors, so it is screened out completely in the process.

What's in this for Netflix and its users?

Netflix's decision to use AVA will not only save manual labor, but also reduce the cost of having people sift through millions of images to get that one perfect shot. This unique approach surfaces meaningful images from video and lets creative teams invest their time in designing stunning artwork. As for its users, a good title image means establishing a deeper connection to the show's characters and storyline, thus improving their overall experience.

To understand the intricate workings of AVA, you can read the Netflix engineering team's original post on Medium.

How everyone at Netflix uses Jupyter notebooks from data scientists, machine learning engineers, to data analysts
Netflix releases FlameScope
Netflix bring in Verna Myers as new VP of Inclusion strategy to boost cultural diversity
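As promised above, here is a toy illustration of the "visual metadata" idea. This is not Netflix's implementation; the specific measures (mean intensity for brightness, mean gradient magnitude for sharpness) and the library choices are assumptions for the example:

```python
import numpy as np
from PIL import Image

def visual_metadata(path):
    """Toy brightness/sharpness scores of the kind frame-ranking pipelines use."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    brightness = gray.mean()                    # 0 (black) .. 255 (white)
    gy, gx = np.gradient(gray)                  # per-pixel image gradients
    sharpness = np.sqrt(gx**2 + gy**2).mean()   # mean gradient magnitude
    return {"brightness": brightness, "sharpness": sharpness}

# Example: score one extracted frame and compare across candidates.
print(visual_metadata("frame_00042.png"))  # hypothetical frame file
```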

Introducing Microsoft’s AirSim, an open-source simulator for autonomous vehicles built on Unreal Engine

Bhagyashree R
19 Sep 2019
4 min read
Back in 2017, the Microsoft Research team developed and open-sourced the Aerial Informatics and Robotics Simulation (AirSim) platform. On Monday, the team shared how AirSim can be used to solve the current challenges in the development of autonomous systems.

Microsoft AirSim and its features

Microsoft AirSim is an open-source, cross-platform simulation platform for autonomous systems, including autonomous cars, wheeled robots, aerial drones, and even static IoT devices. It works as a plugin for Epic Games' Unreal Engine, and there is also an experimental release for the Unity game engine. An example of drone simulation in AirSim is available here: https://www.youtube.com/watch?v=-WfTr1-OBGQ

AirSim was built to address two main problems developers face during the development of autonomous systems: the requirement of large datasets for training and testing, and the ability to debug in a simulator. With AirSim, the team aims to equip developers with a platform offering varied training experiences, so that autonomous systems can be exposed to different scenarios before they are deployed in the real world. "Our goal is to develop AirSim as a platform for AI research to experiment with deep learning, computer vision and reinforcement learning algorithms for autonomous vehicles. For this purpose, AirSim also exposes APIs to retrieve data and control vehicles in a platform-independent way," the team writes (a minimal sketch of those APIs follows at the end of this piece).

AirSim provides physically and visually realistic simulations and supports hardware-in-the-loop simulation with popular flight controllers such as PX4, an open-source autopilot system. It can easily be extended to accommodate new types of autonomous vehicles, hardware platforms, and software protocols, and its extensible architecture allows quickly adding custom autonomous system models and new sensors to the simulator.

AirSim for tackling the common challenges in autonomous systems development

In April, the Microsoft Research team collaborated with Carnegie Mellon University and Oregon State University, collectively called Team Explorer, to take on the DARPA Subterranean (SubT) Challenge. The challenge was to build robots that can autonomously map, navigate, and search underground environments during time-sensitive combat operations or disaster response scenarios. On Monday, Microsoft's Senior Research Manager Ashish Kapoor shared how they used AirSim to tackle this challenge. Team Explorer and Microsoft used AirSim to create an "intricate maze" of man-made tunnels in a virtual world, using reference material from real-world mines to modularly generate a network of interconnected tunnels. This high-definition simulation also included robotic vehicles and a suite of sensors. AirSim provided a rich platform that Team Explorer could use to test their methods and to generate training experiences for creating the decision-making components of autonomous agents. Microsoft believes AirSim can also help accelerate the creation of real datasets for underground environments. "Microsoft's ability to create near-realistic autonomy pipelines in AirSim means that we can rapidly generate labeled training data for a subterranean environment," Kapoor wrote.

Kapoor also talked about another collaboration, with Air Shepherd and USC, to help counter wildlife poaching using AirSim. In this collaboration, they developed unmanned aerial vehicles (UAVs) equipped with thermal infrared cameras that fly through national parks to search for poachers and animals. AirSim was used to create a simulation of this use case, in which virtual UAVs flew over virtual environments at altitudes of 200 to 400 feet above ground level. "The simulation took on the difficult task of detecting poachers and wildlife, both during the day and at night, and ultimately ended up increasing the precision in detection through imaging by 35.2%," the post reads.

These were some of the recent use cases where AirSim was used. To explore more, and to contribute, you can check out its GitHub repository.

Other news in Data:
4 important business intelligence considerations for the rest of 2019
How artificial intelligence and machine learning can help us tackle the climate change emergency
France and Germany reaffirm blocking Facebook's Libra cryptocurrency
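As referenced above, here is a hedged sketch of what AirSim's platform-independent vehicle APIs look like from the Python client (this assumes a simulator instance is already running; the coordinates, speed, and camera name are illustrative):

```python
import airsim  # pip install airsim; talks to a running AirSim simulation

client = airsim.MultirotorClient()   # connect to the simulator's RPC server
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

client.takeoffAsync().join()
# Fly to x=10 m, y=0 m, z=-5 m (NED frame, so negative z is up) at 3 m/s.
client.moveToPositionAsync(10, 0, -5, 3).join()

# Retrieve a scene image from the front camera ("0").
responses = client.simGetImages(
    [airsim.ImageRequest("0", airsim.ImageType.Scene)]
)
print(len(responses[0].image_data_uint8), "bytes of image data")

client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```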


How to Easily Grant Permissions to all Databases

Anonymous
29 Dec 2020
6 min read
A recurring need that I have seen is a means to grant a user or group of users access to all databases in one fell swoop. Recently, I shared an introductory article on this requirement. In this article, I will demonstrate how to easily grant permissions to all databases via the use of Server Roles.

When talking about Server Roles, I don't mean the fixed server roles. It would be crazy easy, insane, and stupid to just use the fixed server roles to grant users access to all databases. Why? Well, only two of the fixed server roles cover the permission scope most users would need to access and use a database. Those roles are sysadmin and securityadmin.

The Bad, the Bad, and the Ugly

The sysadmin role should be fairly obvious and is generally what every vendor and a majority of developers insist on having. We all know how dangerous and wrong that would be. The securityadmin fixed server role, on the other hand, is less obvious. That said, securityadmin can grant permissions and should therefore be treated the same as sysadmin. By no means do we ever want to grant access via these roles as a shortcut; that would be security-defeating.

There is one more role that seems to be a popular choice: the public role. Visualize a child's eyes rolling into the back of their head and you have my reaction to this option. This is the ugly of the options, but it cannot go without mention because I deal with vendors on a regular basis that continue to insist on doing things this way. It is not even the easy method, because you have to manually grant permissions to the public fixed server role, so it comes with some work, but it is just flat stupid and lazy to grant all of these permissions to the public role. Here is an article on this absurd method.

Create Custom Server Roles for all DB Access

The prerequisite for an easy button is to create your own server-level role. I demonstrated how to do this in a previous article and will glaze over it quickly again here.

```sql
IF NOT EXISTS ( SELECT name
                FROM sys.server_principals
                WHERE name = 'Gargouille' )
BEGIN
    CREATE LOGIN [Gargouille]
    WITH PASSWORD = N'SuperDuperLongComplexandHardtoRememberPasswordlikePassw0rd1!',
         DEFAULT_DATABASE = [],  -- database name elided in the original post
         CHECK_EXPIRATION = OFF,
         CHECK_POLICY = OFF;
END;

-- check for the server role
IF NOT EXISTS ( SELECT name
                FROM sys.server_principals
                WHERE name = 'SpyRead'
                  AND type_desc = 'SERVER_ROLE' )
BEGIN
    CREATE SERVER ROLE [SpyRead] AUTHORIZATION [securityadmin];
    GRANT CONNECT ANY DATABASE TO [SpyRead];
END;

USE master;
GO

IF NOT EXISTS ( SELECT mem.name AS MemberName
                FROM sys.server_role_members rm
                INNER JOIN sys.server_principals sp
                    ON rm.role_principal_id = sp.principal_id
                LEFT OUTER JOIN sys.server_principals mem
                    ON rm.member_principal_id = mem.principal_id
                WHERE sp.name = 'SpyRead'
                  AND sp.type_desc = 'SERVER_ROLE'
                  AND mem.name = 'Gargouille' )
BEGIN
    ALTER SERVER ROLE [SpyRead] ADD MEMBER [Gargouille];
END;
```

In this demo script, I have created a login and a server-level role, then added the login to the role. The only permission currently on the server-level role is "Connect Any Database". Now, let's say we need to be able to grant this user permission to read (that would be SELECT in SQL terms) data in all databases. The only thing I need to do is make this permission change on the role:

```sql
GRANT SELECT ALL USER SECURABLES TO [SpyRead];
```

That is a far simpler approach, right? Let's see how it might look to add a login to the role from SQL Server Management Studio (SSMS): after creating a custom server role, you will be able to see it from the login properties page and then add the login directly to the role from the GUI. That makes the easy button just a little bit better.

Test it out

Now, let's test the permission to select from a database.

```sql
EXECUTE AS LOGIN = 'Gargouille';
GO

USE [];  -- database name elided in the original post
GO

-- no permissions on server state
SELECT * FROM sys.dm_os_wait_stats;
GO

-- yet can select from any database
SELECT USER_NAME();
SELECT * FROM sys.objects;

REVERT;
```

Clearly, you will need to substitute your own database name. Testing this will prove that the login can connect to the database and can also select data from that database.

Now this is where it gets a little dicey. Suppose you wish to grant the delete option (not a super wise idea, to be honest) to a user in every database. That won't work with this method; you would need to grant those permissions on a per-case basis. This solution works best for permissions at the server scope, which include things such as "Control Server", "View Any Definition", "View Server State", and "Select All User Securables". This isn't a complete list, just enough to give you an idea. That said, how often do you really need a user to be able to change data in EVERY database on a server? I certainly hope your security is not set up in such a fashion.

Caveat

Suppose you decide to utilize the "SELECT ALL USER SECURABLES" permission; there is an additional feature that comes with it. This permission can also be used to deny SELECT against all databases. As a bonus, it works to block sysadmins as well, sort of: it does deny the SELECT permission when applied to a sysadmin, unlike other methods. However, any sysadmin worth their salt can easily revoke that permission, because they have the "CONTROL" server permission. That said, it would be a worthwhile trick to play on your junior DBAs to see what they do.

Put a bow on it

As data professionals, we are always trying to find more efficient ways of doing the job. Sometimes, against our best advice, we are required to find a more efficient way to give users access to more than they probably should have. This article demonstrates one method to easily grant READ access to all databases while still keeping the environment secure, hitting that chord of having done it more efficiently.

Interested in learning some deep technical information? Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the fifth article in the 2020 "12 Days of Christmas" series. For the full list of articles, please visit this page.

The post How to Easily Grant Permissions to all Databases first appeared on SQL RNNR.

Related Posts:
Server-Level Roles - Back to Basics (November 20, 2020)
When Too Much is Not a Good Thing (December 13, 2019)
SQL Server User Already Exists - Back to Basics (January 24, 2018)
SHUTDOWN SQL Server (December 3, 2018)
Who needs data access? Not You! (December 12, 2019)