
Tech News


HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more

Vincy Davis
17 Jun 2019
6 min read
Last week, HAProxy 2.0 was released with critical features for cloud-native and containerized environments. This is an LTS (long-term support) release, which includes a powerful set of core features such as Layer 7 retries, cloud-native threading and logging, polyglot extensibility, gRPC support and more, and improves the seamless support for integration into modern architectures. In conjunction with this release, the HAProxy team has also introduced the HAProxy Kubernetes Ingress Controller and the HAProxy Data Plane API. Willy Tarreau, the founder of HAProxy Technologies, has said that these developments will come with the HAProxy 2.1 version. The HAProxy project has also opened up issue submissions on its HAProxy GitHub account.

Some features of HAProxy 2.0

Cloud-Native Threading and Logging
HAProxy can now scale to accommodate any environment with less manual configuration. The number of worker threads is set to match the machine's number of available CPU cores. The process setting is no longer required, thus simplifying the bind line. Two new build parameters have been added, MAX_THREADS and MAX_PROCS, which avoid allocating huge structs. Logging has been made easier for containerized environments: direct logging to stdout and stderr, or to a file descriptor, is now possible.

Kubernetes Ingress Controller
The HAProxy Kubernetes Ingress Controller provides a high-performance ingress for Kubernetes-hosted applications. It supports TLS offloading, Layer 7 routing, rate limiting, and whitelisting. Ingresses can be configured through either ConfigMap resources or annotations. The Ingress Controller gives users the ability to: use only one IP address and port and direct requests to the correct pod based on the Host header and request path; secure communication with built-in SSL termination; apply rate limits for clients while optionally whitelisting IP addresses; select from among any of HAProxy's load-balancing algorithms; get superior Layer 7 observability with the HAProxy Stats page and Prometheus metrics; and set maximum connection limits to backend servers to prevent overloading services.

Layer 7 Retries
With HAProxy 2.0, it is possible to retry a failed HTTP request from another server at Layer 7. The new configuration directive, retry-on, can be used in a defaults, listen, or backend section. The number of attempts at retrying can be specified using the retries directive. The full list of retry-on options is given on the HAProxy blog. HAProxy 2.0 also introduces a new http-request action called disable-l7-retry. It allows the user to disable any attempt to retry the request if it fails for any reason other than a connection failure. This can be useful to make sure that POST requests aren't retried.

Polyglot Extensibility
The Stream Processing Offload Engine (SPOE) and Stream Processing Offload Protocol (SPOP) were introduced in HAProxy 1.7 with the aim of creating the extension points necessary to build upon HAProxy using any programming language. From HAProxy 2.0, libraries and examples are available for the following languages and platforms: C, .NET Core, Golang, Lua, and Python.

gRPC
HAProxy 2.0 delivers full support for the open-source RPC framework, gRPC. This allows bidirectional streaming of data, detection of gRPC messages, and logging of gRPC traffic. Two new converters, protobuf and ungrpc, have been introduced to extract the raw Protocol Buffer messages.
Using Protocol Buffers, gRPC enables users to serialize messages into a binary format that's compact and potentially more efficient than JSON. To start using gRPC in HAProxy, users need to set up a standard end-to-end HTTP/2 configuration.

HTTP Representation (HTX)
The Native HTTP Representation (HTX) was introduced with HAProxy 1.9. Starting from 2.0, it is enabled by default. HTX creates strongly typed, well-delineated header fields and allows for gaps and out-of-order fields. It also allows HAProxy to maintain consistent semantics from end to end and provides higher performance when translating HTTP/2 to HTTP/1.1 or vice versa.

LTS Support for 1.9 Features
HAProxy 2.0 brings LTS support for many features that were introduced or improved upon during the 1.9 release. Some of them are listed below:
Small Object Cache with an increased caching size of up to 2GB, set with the max-object-size directive. The total-max-size setting determines the total size of the cache and can be increased up to 4095MB.
New fetches like date_us, cpu_calls and more, which report either an internal state or information from layers 4, 5, 6, and 7.
New converters like strcmp, concat and more, which allow data to be transformed within HAProxy.
Server Queue Priority Control, which lets users prioritize some queued connections over others. This is helpful to deliver JavaScript or CSS files before images.
The resolvers section supports using resolv.conf by specifying parse-resolv-conf.

The HAProxy team plans to build HAProxy 2.1 with features like UDP support, OpenTracing, and dynamic SSL certificate updates. HAProxyConf, the inaugural HAProxy community conference, is scheduled to take place in Amsterdam, Netherlands on November 12-13, 2019.

A user on Hacker News comments, “HAProxy is probably the best proxy server I had to deal with ever. It's performance is exceptional, it does not interfere with L7 data unless you tell it to and it's extremely straightforward to configure reading the manual.” Some users are busy comparing HAProxy with the nginx web server. One says, “In my previous company we used to use HAProxy, and it was a hassle. Yes, it is powerful. However, nginx is way easier to configure and set up, and performance wise is a contender for most usual applications people needed. nginx just fulfills most people's requirements for reverse proxy and has solid HTTP/2 support (and other features) for way longer.” Another user states, “Big difference is that haproxy did not used to support ssl without using something external like stunnel -- nginx basically did it all out of the box and I haven't had a need for haproxy in quite some time now.” Others suggest that HAProxy is trying hard to stay equipped with the latest features in this release. https://twitter.com/garthk/status/1140366975819849728 A user on Hacker News agrees, saying, “These days I think HAProxy and nginx have grown a lot closer together on capabilities.”

Visit the HAProxy blog for more details about HAProxy 2.0.

HAProxy introduces stick tables for server persistence, threat detection, and collecting metrics
MariaDB announces the release of MariaDB Enterprise Server 10.4
Businesses need to learn how to manage cloud costs to get real value from serverless and machine learning-as-a-service


Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more

Fatema Patrawala
02 Sep 2019
5 min read
Last Friday, the Kubernetes team announced the release of etcd version 3.4. etcd 3.4 focuses on stability, performance, and ease of operation. It includes features like pre-vote and a non-voting member, along with improvements to the storage backend and the client balancer.

Key features and improvements in etcd v3.4

Better backend storage
etcd v3.4 includes a number of performance improvements for large-scale Kubernetes workloads. In particular, etcd experienced performance issues with a large number of concurrent read transactions even when there was no write (e.g. “read-only range request ... took too long to execute”). Previously, the storage backend's commit operation on pending writes blocked incoming read transactions, even when there was no pending write. Now, the commit does not block reads, which improves long-running read transaction performance. The team has further made backend read transactions fully concurrent. Previously, ongoing long-running read transactions blocked writes and upcoming reads. With this change, write throughput is increased by 70% and P99 write latency is reduced by 90% in the presence of long-running reads. The team also ran the Kubernetes 5000-node scalability test on GCE with this change and observed similar improvements.

Improved raft voting process
The etcd server implements the Raft consensus algorithm for data replication. Raft is a leader-based protocol. Data is replicated from leader to follower; a follower forwards proposals to a leader, and the leader decides what to commit or not. The leader persists and replicates an entry once it has been agreed upon by a quorum of the cluster. The cluster members elect a single leader, and all other members become followers. The elected leader periodically sends heartbeats to its followers to maintain its leadership, and expects responses from each follower to keep track of its progress.

In its simplest form, a Raft leader steps down to a follower when it receives a message with a higher term, without any further cluster-wide health checks. This behavior can affect the overall cluster availability. For instance, a flaky (or rejoining) member drops in and out and starts a campaign. This member ends up with a higher term, ignores all incoming messages with lower terms, and sends out messages with higher terms. When the leader receives this message of a higher term, it reverts to a follower. This becomes more disruptive when there is a network partition: whenever the partitioned node regains its connectivity, it can possibly trigger a leader re-election. To address this issue, etcd Raft introduces a new node state, pre-candidate, with the pre-vote feature. The pre-candidate first asks other servers whether it is up-to-date enough to get votes. Only if it can get votes from the majority does it increment its term and start an election. This extra phase improves the robustness of leader election in general, and helps the leader remain stable as long as it maintains its connectivity with the quorum of its peers. (A rough sketch of this idea appears at the end of this piece.)

Introducing a new raft non-voting member, “Learner”
The challenge with membership reconfiguration is that it often leads to quorum size changes, which are prone to cluster unavailability. Even if it does not alter the quorum, clusters with membership changes are more likely to experience other underlying problems. In order to address these failure modes, etcd introduced a new node state, “Learner”, which joins the cluster as a non-voting member until it catches up to the leader's logs.
This means the learner still receives all updates from the leader, while it does not count towards the quorum, which is used by the leader to evaluate peer activeness. The learner only serves as a standby node until promoted. This relaxed quorum requirement provides better availability during membership reconfiguration and improves operational safety.

Improvements to client balancer failover logic
etcd is designed to tolerate various system and network faults. By design, even if one node goes down, the cluster “appears” to be working normally, by providing one logical cluster view of multiple servers. But this does not guarantee the liveness of the client. Thus, the etcd client has implemented a different set of intricate protocols to guarantee its correctness and high availability under faulty conditions. Historically, the etcd client balancer relied heavily on the old gRPC interface: every gRPC dependency upgrade broke client behavior. A majority of development and debugging efforts were devoted to fixing those client behavior changes. As a result, its implementation had become overly complicated, with bad assumptions about server connectivity. The primary goal in this release was to simplify the balancer failover logic in the etcd v3.4 client: instead of maintaining a list of unhealthy endpoints, the client simply moves to another endpoint whenever it gets disconnected from the current one.

To know more about this release, check out the Changelog page on GitHub.

What's new in cloud and networking this week?
VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi cloud platforms and more!
The Accelerate State of DevOps 2019 Report: Key findings, scaling strategies and proposed performance & productivity models
Pivotal open sources kpack, a Kubernetes-native image build service
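To make the pre-vote phase a little more concrete, here is the rough TypeScript sketch referenced above. It is purely illustrative: etcd's actual implementation lives in its Go raft package, and the Peer interface, the majority check, and the election stub below are simplified assumptions rather than etcd's real API.

interface Peer {
  // Would this peer grant a vote to a candidate at `term` whose log
  // is described by (lastLogTerm, lastLogIndex)?
  requestPreVote(term: number, lastLogTerm: number, lastLogIndex: number): Promise<boolean>;
}

class RaftNode {
  constructor(
    private term: number,
    private lastLogTerm: number,
    private lastLogIndex: number,
    private peers: Peer[],
  ) {}

  // Pre-vote phase: ask peers whether we could win an election *without*
  // incrementing our term, so a flaky or rejoining node cannot disrupt
  // a healthy leader by inflating terms.
  async campaign(): Promise<void> {
    const answers = await Promise.all(
      this.peers.map((p) =>
        p.requestPreVote(this.term + 1, this.lastLogTerm, this.lastLogIndex).catch(() => false),
      ),
    );
    const granted = answers.filter(Boolean).length + 1; // count our own vote
    const clusterSize = this.peers.length + 1;
    if (granted > clusterSize / 2) {
      this.term += 1;             // only now is the term bumped
      await this.startElection(); // the real RequestVote phase follows
    }
    // Otherwise remain a follower: no term inflation, no disruption.
  }

  private async startElection(): Promise<void> {
    // The real election (RequestVote RPCs) would go here.
  }
}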


Microsoft acquires AI startup Lobe, a no code visual interface tool to build deep learning models easily

Natasha Mathur
14 Sep 2018
4 min read
Microsoft announced yesterday that it has acquired Lobe, a small San Francisco based AI startup. Lobe is a visual interface tool that allows people to easily create intelligent apps capable of understanding hand gestures, hearing music, reading handwriting, and more, without any coding involved. Lobe is aimed at making deep learning simple, understandable, and accessible to everyone. With Lobe's simple visual interface, anyone can develop deep learning and AI models quickly, without having to write any code.

A look at Lobe's features

Drag, drop, learn
Lobe lets you build custom deep learning models, train them, and ship them directly in your app without any coding required. You can start by dragging in a folder of training examples from your desktop. This lets you build a custom deep learning model and begin its training. Once you're done with this, you can export a trained model and ship it directly in your app.

Connect together smart lobes
There are smart building blocks called lobes in Lobe. These lobes can be connected together, allowing you to quickly create custom deep learning models. For instance, you can connect the Hand & Face lobe to find the most prominent hand in the image. After this, connect the Detect Features lobe to find the important features in the hand. Finally, you can connect the Generate Labels lobe to predict the emoji in the image. You can also refine your model by adjusting each lobe's unique settings or by editing any lobe's sub-layers.

Exploring datasets visually
With Lobe, you can have your entire dataset displayed visually. This helps you browse and sort through all your examples. All you have to do is select any icon and see how that example performs in your model. Your dataset gets automatically split into a Lesson, which teaches your model during training. There is also a Test set that evaluates how your model will perform in the real world, on examples that have never been seen before.

Real-time training results
Lobe comes with super fast cloud training that provides real-time results without slowing down your computer. There are interactive charts which help you monitor the accuracy of your model and understand how the model improves over time. The best accuracy then automatically gets selected and saved.

Advanced control over every layer
Lobe is built on top of the deep learning frameworks TensorFlow and Keras. This allows you to control every layer of your model. With Lobe, you can tune hyperparameters, add layers, and design new architectures with the help of hundreds of advanced building block lobes.

Ship it in your application
After you're done training your model, it can be exported to TensorFlow or CoreML, which you can then run directly in your app. There's also an easy-to-use Lobe Developer API, which lets you host your model in the cloud and integrate it into your app.

What could Microsoft's plans be with this acquisition?
This is not the first AI startup acquired by Microsoft. Other than Lobe, Microsoft also acquired Bonsai.ai, a deep reinforcement learning platform, in July to build machine learning models for autonomous systems of all kinds. Similarly, Microsoft acquired Semantic Machines this May to build a conversational AI center of excellence in Berkeley to advance the state of conversational AI. “Over the last few months, we've made multiple investments in companies to further this (expanding its growth in AI) goal.
These are just two recent examples of investments we have made to help us accelerate the current state of AI development”, says Kevin Scott, EVP, and CTO at Microsoft, in yesterday’s announcement on their official blog. Looks like Microsoft is all set on bringing more AI capabilities to its users. In fact, major tech firms around the world are walking along the same path and acquiring as many technology companies as they can. For instance, Amazon acquired AI cybersecurity startup Sqrrl, Facebook acquired Bloomsbury AI, and Intel acquired Vertex.ai earlier this year. “In many ways though, we’re only just beginning to tap into the full potential AI can provide. This in large part is because AI development and building deep learning models are slow and complex processes even for experienced data scientists and developers. To date, many people have been at a disadvantage when it comes to accessing AI, and we’re committed to changing that” writes Kevin. For more information, check out the official Microsoft Announcement. Say hello to IBM RXN, a free AI Tool in IBM Cloud for predicting chemical reactions Google’s new What-if tool to analyze Machine Learning models and assess fairness without any coding


Introducing kdevops, a modern DevOps framework for Linux kernel development

Fatema Patrawala
20 Aug 2019
3 min read
Last Friday, Luis Chamberlain announced the release of kdevops, a DevOps framework for Linux kernel development. Chamberlain wrote in his email, “the goal behind this project is to provide a modern devops framework for Linux kernel development. It is not a test suite, it is designed to use any test suites, and more importantly, it allows us to let us easily set up test environments in a jiffie. It supports different virtualization environments, and different cloud environments, and supports different Operating Systems.” kdevops is a sample framework which lets you easily set up a testing environment for a number of different use cases.

How does kdevops work?
kdevops relies on Vagrant, Terraform, and Ansible to get you going with your virtualization/bare metal/cloud provisioning environment. It relies heavily on public Ansible Galaxy roles and Terraform modules. This lets the kdevops team share code with the community and allows the project to be used as a demo framework for these ansible roles and terraform modules.

There are three parts to the long-term ideals for kdevops: provisioning the required virtual hosts/cloud environment, provisioning your requirements, and running whatever you want.

Ansible is used to get all the required ansible roles. Then Vagrant or Terraform can be used to provision hosts. Vagrant makes use of two ansible roles: one to update ~/.ssh/config and one to update the systems with basic development preference files, things like .gitconfig or bashrc hacks (this last part is handled by the devconfig ansible role). Since ~/.ssh/config is updated, you can then run further ansible roles manually when using Vagrant. If Terraform is used for cloud environments, it updates ~/.ssh/config directly without ansible; however, since access to hosts on cloud environments can vary in time, running all ansible roles is expected to be done manually.

What you can do with kdevops
Full vagrant provisioning, including updating your ~/.ssh/config
Terraform provisioning on different cloud providers
Running ansible to install dependencies on Debian
Using ansible to clone, compile and boot into any random kernel git tree with a supplied config
Updating ~/.ssh/config for terraform, first tested with the OpenStack provider, with both generic and special minicloud support. Other terraform providers just require making use of the newly published terraform module add-host-ssh-config

On Hacker News, this release has gained positive reviews, but the main concern for users is whether it has anything to do with DevOps, as it appears to be automated test environment provisioning. One of them comments, “This looks cool, but I'm not sure what it has to do with devops? It just seems to be automated test environment provisioning, am I missing something?” On Reddit as well, Linux users are happy with this setup and find it really promising; one of the comments reads, “I have so much hacky scriptwork around kvm, have always been looking for a cleaner setup; this looks super promising. thank you.”

To know more about this release, check out the official announcement page as well as the GitHub page.

Why do IT teams need to transition from DevOps to DevSecOps?
Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]
Azure DevOps report: How a bug caused ‘sqlite3 for Python’ to go missing from Linux images


Ghost 3.0, an open-source headless Node.js CMS, released with JAMStack integration, GitHub Actions, and more!

Savia Lobo
23 Oct 2019
4 min read
Yesterday, the team behind Ghost, an open-source headless Node.js CMS, announced its next major version, Ghost 3.0. The new version represents “a total of more than 15,000 commits across almost 300 releases”. Ghost is now used by the likes of Apple, DuckDuckGo, OpenAI, The Stanford Review, Mozilla, Cloudflare, Digital Ocean, and many others. “To date, Ghost has made $5,000,000 in customer revenue whilst maintaining complete independence and giving away 0% of the business,” the official website highlights. https://twitter.com/Ghost/status/1186613938697338881

What's new in Ghost 3.0?

Ghost on the JAMStack
The team has revamped the traditional architecture using JAMStack, which makes Ghost a completely decoupled headless CMS. In this way, users can generate a static site and later add dynamic features to make it powerful. The new architecture unlocks content management that is fundamentally built via APIs, webhooks, and frameworks to generate robust modern websites.

Continuous theme deployments with GitHub Actions
The process of manually making a zip, navigating to Ghost Admin, and uploading an update in the browser can be tedious. To deploy Ghost themes to production in a better way, the team decided to integrate with GitHub Actions. This makes it easy to continuously sync custom Ghost themes to live production sites with every new commit.

New WordPress migration plugin
Earlier versions of Ghost included a very basic WordPress migrator plugin that made it extremely difficult for anyone to move their data between the platforms or have a smooth experience. The new Ghost 3.0 compatible WordPress migration plugin provides a single-button download of the full WordPress content + image archive in a format that can be dragged and dropped into Ghost's importer.

Those who are new and want to explore Ghost 3.0 can create a new site in a few clicks with an unrestricted 14-day free trial, as all new sites on Ghost (Pro) are running Ghost 3.0. The team encourages users to try out Ghost 3.0 and share feedback on the Ghost forum, or to help out on GitHub with building the next features alongside the Ghost team. Ben Thompson's Stratechery, a subscription-based newsletter featuring in-depth commentary on tech and media news, recently posted an interview with Ghost CEO John O'Nolan. This interview features questions on what Ghost is, where it came from, and much more.

Ghost 3.0 has received a positive response from many, in part because it is moving towards adopting the static site JAMStack approach. A user on Hacker News commented, “In my experience, Ghost has been the no-nonsense blog CMS that has been stable and just worked with very little maintenance. I like that they are now moving towards static site JAMStack approach, driven by APIs rather than the current SSR model. This lets anybody to customise their themes with the language / framework of choice and generating static builds that can be cached for improved loading times.” Another user who is new to Ghost commented, “I've never tried Ghost, although their website always appealed to me (one of the best designed website I know). I've been using WordPress for the past 13 years, for personal and also professional projects, which means the familiarity I've built with building custom themes never drew me towards trying another CMS. But going through this blog post announcement, I saw that Ghost can be used as a headless CMS with frontend frameworks.
And since I started using GatsbyJS extensively in the past year, it seems like something that would work _really_ well together. Gonna try it out! And congrats on remaining true to your initial philosophy.” To know more about the other features in detail, read the official blog post. Google partners with WordPress and invests $1.2 million on “an opinionated CMS” called Newspack Verizon sells Tumblr to WordPress parent, Automattic, for allegedly less than $3million, a fraction of its acquisition cost FaunaDB brings its serverless database to Netlify to help developers create apps
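To give a flavor of what the decoupled, API-driven setup described above looks like from a frontend or static-site build, here is a small TypeScript sketch that reads posts over a Ghost site's Content API with plain fetch. The site URL, the Content API key, and the v3 content endpoint path are assumptions made for this illustration, not details from the announcement; a real integration would take them from the site's own Ghost Admin integration settings.

// Minimal sketch: pull published posts from a (hypothetical) Ghost site so a
// static-site generator or frontend framework can render them.
const SITE_URL = "https://demo-site.example.com";   // hypothetical Ghost site
const CONTENT_API_KEY = "<your-content-api-key>";   // placeholder key

interface GhostPost {
  id: string;
  title: string;
  html: string;
  published_at: string;
}

async function fetchPosts(limit = 5): Promise<GhostPost[]> {
  // Assumes the v3 Content API path used by Ghost 3.x installs.
  const url = `${SITE_URL}/ghost/api/v3/content/posts/?key=${CONTENT_API_KEY}&limit=${limit}`;
  const res = await fetch(url);
  if (!res.ok) {
    throw new Error(`Ghost Content API responded with ${res.status}`);
  }
  const body = (await res.json()) as { posts: GhostPost[] };
  return body.posts;
}

fetchPosts().then((posts) => posts.forEach((p) => console.log(p.title)));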


React Native 0.61 introduces Fast Refresh for reliable hot reloading

Bhagyashree R
25 Sep 2019
2 min read
Last week, the React Native team announced the release of React Native 0.61. This release comes with an overhauled reloading feature called Fast Refresh, a new hook named ‘useWindowDimensions’, and more. https://twitter.com/dan_abramov/status/1176597851822010375

Key updates in React Native 0.61

Fast Refresh for reliable hot reloading
In December last year, the React Native team asked developers what they dislike about React Native. Developers listed the problems they face when creating a React Native application, including clunky debugging and the open-source contribution process, among others. Hot reloading refreshes the updated files without losing the app state. Previously, it did not work reliably with function components, often failed to update the screen, and wasn't resilient to typos and mistakes, which was one of the major pain points. To address this, React Native 0.61 introduces Fast Refresh, a combination of live reloading and hot reloading. Dan Abramov, a core React developer, wrote in the announcement, “In React Native 0.61, we're unifying the existing “live reloading” (reload on save) and “hot reloading” features into a single new feature called “Fast Refresh”.” Fast Refresh fully supports function components and hooks, recovers gracefully after typos and mistakes, and does not perform invasive code transformations. It is enabled by default; however, you can turn it off in the Dev Menu.

The useWindowDimensions hook
React Native 0.61 comes with a new hook called useWindowDimensions, which can be used as an alternative to the Dimensions API in most cases. It automatically provides and subscribes to window dimension updates (a minimal sketch appears at the end of this piece).

Read also: React Conf 2018 highlights: Hooks, Concurrent React, and more

Improved CocoaPods compatibility
In React Native 0.60, CocoaPods was integrated by default, which ended up breaking builds that used the use_frameworks! attribute. In React Native 0.61, this issue is fixed by making some updates to the podspec, which describes a version of a Pod library.

Read also: React Native development tools: Expo, React Native CLI, CocoaPods [Tutorial]

Check out the official announcement to know more about React Native 0.61.

5 pitfalls of React Hooks you should avoid – Kent C. Dodds
#Reactgate forces React leaders to confront community's toxic culture head on
Ionic React RC is now out!
React Native VS Xamarin: Which is the better cross-platform mobile development framework?
React Native community announce March updates, post sharing the roadmap for Q4
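Here is the sketch referenced above: a minimal function component built around the new useWindowDimensions hook. The component name and the rendered text are illustrative only; the hook itself is the one shipped in React Native 0.61.

import React from 'react';
import { Text, View, useWindowDimensions } from 'react-native';

// Illustrative component: it re-renders automatically whenever the window
// dimensions change, with no manual Dimensions event listener to manage.
export default function WindowInfo() {
  const { width, height, fontScale } = useWindowDimensions();
  return (
    <View>
      <Text>
        Window: {Math.round(width)} x {Math.round(height)} (font scale {fontScale})
      </Text>
    </View>
  );
}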

Time to Set Service Dependencies for SQL Server, it’s Easy from Blog Posts - SQLServerCentral

Anonymous
29 Dec 2020
6 min read
In the previous article I mentioned the need for setting certain dependencies for SQL Server service startup. As promised, this article is a follow-up to help you easily set the service dependencies for SQL Server. Setting service startup dependencies is an essential step to take to help ensure a seamless startup experience and to reduce the chance of failure. Some of the possible failures that could occur were explained in the previous article as well as in this article about MSAs by Wayne Sheffield. Our goal as data professionals is to minimize the chance for surprises and unnecessary time spent troubleshooting problems that shouldn't have happened in the first place.

Set Service Dependencies
What is it that a service dependency does for the system? Well, a service dependency is much like any sort of dependency. A service dependency simply means that in order for a service to function properly, another service needs to be functioning properly. This is very much like having children. The children are called dependents because they require somebody else to be around to take care of and support them to a certain point. A service that has a dependency is basically a child service that needs a parent service to be functioning properly so the child service can go about its duties and do what is expected / desired of it.

So what are the service dependencies that we should be setting? The services that should be running in order to ensure SQL Server will work properly are Netlogon, W32Time, and KEYISO. For the SQL Agent service, the same services can be set as dependencies, but you really only need to ensure that the SQL Server service is listed as a service dependency. Here is an example of what that would look like from the service properties pages in the services control panel.

Now, you can either laboriously enter each of those dependencies while editing the registry (ok, so it isn't really that laborious to do it by hand via regedit, but that does more easily permit unwanted errors to occur) or you can take advantage of something that is repeatable and easier to run. A script comes to mind as an appropriate method for that latter option.

Script it once!
Scripts are awesome resources to make our lives easier. This script is one that I use time and again to quickly set all of these service dependencies. In addition, it can also set the properties for your MSA account. One thing it does not do is set the service to “Automatic (Delayed Start)” instead of the default “Automatic” start type. That sounds like a fantastic opportunity for you to provide feedback on how you would add that to the script. Without further ado, here is the script to help save time and set your service dependencies easily.
#Todo - modify so can be run against a group of servers
#     - modify so can be run remotely
$servicein = '' #'MSSQL$DIXNEUFLATIN1' #use single quotes in the event the service name has a $ like sql named instances
$svcaccntname = '' #'svcmg_saecrm01$' #to set managed service account properties
#$RequiredServices = @("W32Time","Netlogon","KEYISO");
$RequiredServices = @('W32Time','Netlogon','KEYISO');
#$CurrentServices;

IF($servicein){
    $ServiceList = [ordered]@{ Name = $servicein }
}

IF($svcaccntname) {
    $ServiceList = Get-WmiObject Win32_Service | Select Name, StartName, DisplayName |
        Where-Object {($_.Name -match 'MSSQL' -or $_.Name -match 'Agent' -or $_.Name -match 'ReportServer') `
            -and $_.DisplayName -match 'SQL SERVER' `
            -or $_.StartName -like "*$svcaccntname*" }
}
ELSE{
    $ServiceList = Get-WmiObject Win32_Service | Select Name, StartName, DisplayName |
        Where-Object {($_.Name -match 'MSSQL' -or $_.Name -match 'Agent' -or $_.Name -match 'ReportServer') `
            -and $_.DisplayName -match 'SQL SERVER' `
        }
}

foreach ($service in $ServiceList) {
    $servicename = $service.Name
    #$RequiredServices = @("W32Time","Netlogon","KEYISO"); #init at top
    $CurrentReqServices = @(Get-Service -Name $servicename -RequiredServices | Select Name );

    if ($CurrentReqServices) {
        $CurrentReqServices | get-member -MemberType NoteProperty | ForEach-Object {
            $ReqName = $_.Name; $ReqValue = $CurrentReqServices."$($_.Name)"
        }
        "Current Dependencies = $($ReqValue)";
    }
    ELSE {
        "Current Dependencies Do NOT exist!";
        $ReqValue = $RequiredServices
    }

    $CurrentServices = $RequiredServices + $ReqValue | SELECT -Unique;
    #"Processing Service: $servicename"
    #"Combined Dependencies = $($CurrentServices)";

    $dependencies = get-itemproperty -path "HKLM:\SYSTEM\CurrentControlSet\Services\$servicename" -Name DependOnService -ErrorAction SilentlyContinue
    if ($servicename -match 'MSSQL'){
        if ($dependencies) {
            #$dependencies.DependOnService
            Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\$servicename" -Name DependOnService -Value $CurrentServices
        } ELSE {
            New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\$servicename" -Name DependOnService -PropertyType MultiString -Value $CurrentServices
        }
    }

    IF($svcaccntname) {
        $mgdservice = get-itemproperty -path "HKLM:\SYSTEM\CurrentControlSet\Services\$servicename" -Name ServiceAccountManaged -ErrorAction SilentlyContinue
        if ($mgdservice) {
            Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\$servicename" -Name ServiceAccountManaged -Value @("01","00","00","00")
        } ELSE {
            New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\$servicename" -Name ServiceAccountManaged -PropertyType BINARY -Value @("01","00","00","00")
        }
    }
}

Mandatory disclaimer: Do not run code you find on the internet in your production environment without testing it first. Do not use this code if your vision becomes blurred. Seek medical attention if this code runs longer than four hours. Common side effects include, but are not limited to: Diarrhea, Infertility, Dizziness, Shortness of breath, Impotence, Drowsiness, Fatigue, Heart issues (palpitations, irregular heartbeats), Hives, Nausea and vomiting, Rash, Imposter Syndrome, FOMO, and seasonal Depression. Script creator and site owner take no responsibility or liability for scripts executed.

Put a bow on it
DBAs frequently have tasks that must be done in a repeatable fashion. One of those repeatable tasks should be ensuring that the service dependencies are properly set.
This article shares a script that creates a repeatable, easy routine to take some of that weight off the shoulders of the DBA. The script provided in this article is an easy means of helping ensure consistency and repeatability in tasks that may have to be repeated many times. Doing these tasks with a script is mundane and monotonous enough. Imagine doing it by hand, manually, on hundreds of servers, or even just two servers. Then try to do it again in 6 months on another server, after you have forgotten what you did manually the first two times.

Interested in a little more about security? Check these out! Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the fourth article in the 2020 “12 Days of Christmas” series. For the full list of articles, please visit this page.

The post Time to Set Service Dependencies for SQL Server, it's Easy first appeared on SQL RNNR.

Related Posts:
Here is an Easy Fix for SQL Service Startup Issues… December 28, 2020
CRM Data Source Connection Error January 23, 2020
SHUTDOWN SQL Server December 3, 2018
Single User Mode - Back to Basics May 31, 2018
Changing Default Logs Directory - Back to Basics January 4, 2018

The post Time to Set Service Dependencies for SQL Server, it's Easy appeared first on SQLServerCentral.


GraphQL API is now generally available

Amrata Joshi
17 Jul 2019
3 min read
Last month, the team at Fauna, provider of the cloud-first database FaunaDB, announced the general availability of its GraphQL API. GraphQL is a query language for APIs. With support for GraphQL, FaunaDB now provides cloud database services that allow developers to use the API of their choice to manipulate all their data. GraphQL also helps developer productivity by enabling fast, easy development of serverless applications, and it makes FaunaDB the only serverless backend that has support for universal database access.

Matt Biilmann, CEO at Netlify, a Fauna partner, said, “Fauna's GraphQL support is being introduced at a perfect time as rich, serverless apps are disrupting traditional development models.” Biilmann added, “GraphQL is becoming increasingly important to the entire developer community as they continue to leverage JAMstack and serverless to simplify cloud application development. We applaud Fauna's work as the first company to bring a serverless GraphQL database to market.”

GraphQL helps developers specify the shape of the data they need without requiring changes to the backend components that provide the data. The GraphQL API in FaunaDB helps teams collaborate smoothly: back-end teams can focus on security and business logic, while front-end teams concentrate on presentation and usability. The global serverless architecture market was valued at $3.46 billion in 2017 and is expected to reach $18.04 billion by 2024, as per Zion Research. GraphQL brings growth and development to serverless development, so developers can look for back-end GraphQL support like the one found in FaunaDB. GraphQL supports three general operations: Queries, Mutations, and Subscriptions; currently, FaunaDB natively supports Queries and Mutations. FaunaDB's GraphQL API provides developers with uniform access to transactional consistency, quality of service (QoS), user authorization, data access, and temporal storage.

No limits on data history
FaunaDB is the only database that provides support without any limits on data history. Any API such as SQL in FaunaDB can return data at any given time.

Consistency
FaunaDB provides the highest consistency levels for its transactions, and they are automatically applied to all APIs.

Authorization
FaunaDB provides access control at the row level, which is applicable to all APIs, be it GraphQL or SQL.

Shared data access
It also features shared data access, so data written by one API (e.g., GraphQL) can be read and modified by another API such as FQL.

To know more about the news, check out the press release.

7 reasons to choose GraphQL APIs over REST for building your APIs
Best practices for RESTful web services: Naming conventions and API Versioning [Tutorial]
Implementing routing with React Router and GraphQL [Tutorial]
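For a sense of what talking to a GraphQL API looks like from application code, here is a small TypeScript sketch that sends a query and a mutation over HTTP. The endpoint URL, the Bearer secret, and the Todo schema are assumptions made for this illustration, not details from the announcement.

// Hypothetical setup: a `Todo` type with `title` and `completed` fields has
// been imported into the database's GraphQL endpoint.
const GRAPHQL_ENDPOINT = "https://graphql.fauna.com/graphql"; // assumed endpoint
const SECRET = "<your-database-secret>";                      // placeholder

async function runGraphQL(query: string, variables?: Record<string, unknown>) {
  const res = await fetch(GRAPHQL_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${SECRET}`,
    },
    body: JSON.stringify({ query, variables }),
  });
  return res.json();
}

(async () => {
  // Mutation: create a todo (the generated mutation shape is assumed).
  const created = await runGraphQL(
    `mutation($title: String!) {
       createTodo(data: { title: $title, completed: false }) { _id title }
     }`,
    { title: "Ship the release notes" },
  );

  // Query: read todos back.
  const todos = await runGraphQL(`{ allTodos { data { title completed } } }`);
  console.log(created, todos);
})();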


AWS announces more flexibility in its Certification Exams, drops its exam prerequisites

Melisha Dsouza
18 Oct 2018
2 min read
Last week (on 11th October), the AWS team announced that they are removing the exam prerequisites to give users more flexibility in the AWS Certification Program. Previously, it was a prerequisite for a customer to pass the Foundational or Associate level exam before appearing for the Professional or Specialty certification. AWS has now eliminated this prerequisite, taking into account customers' requests for flexibility. Customers are no longer required to have an Associate certification before pursuing a Professional certification, nor do they need to hold a Foundational or Associate certification before pursuing a Specialty certification.

The Professional level exams are pretty tough to pass. Until a customer has deep knowledge of the AWS platform, passing the Professional exam is difficult. If a customer skips the Foundational or Associate level exams and directly appears for the Professional level exams, they may not have the practice and knowledge necessary to fare well in them. And if they fail the exam, backing up to the Associate level can be demotivating.

AWS Certification helps individuals demonstrate expertise in designing, deploying, and operating highly available, cost-effective, and secure applications on AWS. They gain a proficiency with AWS which will help them earn tangible benefits. The exams also help employers identify skilled professionals who can use AWS technologies to lead IT initiatives. Moreover, the exams will help them reduce the risks and costs of implementing their workloads and projects on the AWS platform.

AWS dominates the cloud computing market, and the AWS Certified Solutions Architect exams can help candidates secure their career in this exciting field. AWS offers digital and classroom training to build cloud skills and prepare for certification exams. To know more about this announcement, head over to their official blog.

‘AWS Service Operator’ for Kubernetes now available allowing the creation of AWS resources using kubectl
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
AWS machine learning: Learning AWS CLI to execute a simple Amazon ML workflow [Tutorial]


IBM’s DeepLocker: The Artificial Intelligence powered sneaky new breed of Malware

Melisha Dsouza
13 Aug 2018
4 min read
In the newfound age of Artificial Intelligence, where everything and everyone uses Machine Learning concepts to make life easier, the dark side of the same can be left unexplored. Cybersecurity is gaining a lot of attention these days. The most influential organizations have experienced a downfall because of undetected malware that has managed to evade even the most secure cyber defense mechanisms. The job just got easier for cyber criminals who exploit AI to empower themselves and launch attacks. Imagine combining AI with cyber attacks! At last week's Black Hat USA 2018 conference, IBM researchers presented their newly developed, AI-backed malware “DeepLocker”. Weaponized AI seems here to stay.

Read also: Black Hat USA 2018 conference highlights for cybersecurity professionals

All you need to know about DeepLocker
Simply put, DeepLocker is a new generation of malware which can stay under the radar and go undetected until its target is reached. It uses an Artificial Intelligence model to identify its target using indicators like facial recognition, geolocation, and voice recognition, all of which are easily available on the web these days! What's interesting is that the malware can hide its malicious payload in carrier applications, like video conferencing software, and go undetected by most antivirus and malware scanners until it reaches specific victims.

Imagine sitting at your computer performing daily tasks. Considering that your profile pictures are available on the internet, your video camera can be manipulated to find a match to your online picture. Once the target (your face) is identified, the malicious payload can be unleashed, your face serving as the key to unlock the virus. This simple “trigger condition” to unlock the attack is almost impossible to reverse engineer. The malicious payload will only be unlocked if the intended target is reached. It achieves this by using a deep neural network (DNN) AI model. The simple “if this, then that” trigger condition used by DeepLocker is transformed into a deep convolutional network of the AI model.

DeepLocker – AI-Powered Concealment (Source: SecurityIntelligence)

DeepLocker makes it really difficult for malware analysts to answer the three main questions: What target is the malware after? Is it after people's faces or some other visual clues? What specific instance of the target class is the valid trigger condition? And what is the ultimate goal of the attack payload? Now that's some commendable work done by the IBM researchers. IBM has always strived to make a mark in the field of innovation. DeepLocker comes as no surprise, as IBM has the highest number of facial recognition patents granted in 2018.

Black Hat USA 2018 sneak preview
The main aims of the IBM researchers (Marc Ph. Stoecklin, Jiyong Jang and Dhilung Kirat) briefing the crowd at the Black Hat USA 2018 conference were:
To raise awareness that AI-powered threats like DeepLocker can be expected very soon
To demonstrate how attackers have the capability to build stealthy malware that can circumvent defenses commonly deployed today
To provide insights into how to reduce risks and deploy adequate countermeasures

To demonstrate the efficiency of DeepLocker's capabilities, they designed and demonstrated a proof of concept: the WannaCry virus was camouflaged in a benign video conferencing application so that it remained undetected by antivirus engines and malware sandboxes.
As a triggering condition, an individual was selected, and the AI was trained to launch the malware when certain conditions, including facial recognition of the target, were met. The experiment was, undoubtedly, a success. DeepLocker is just an experiment by IBM to show how open-source AI tools can be combined with straightforward evasion techniques to build targeted, evasive, and highly effective malware. As the world of cybersecurity is constantly evolving, security professionals will now have to up their game to combat hybrid malware attacks.

Found this article interesting? Read the Security Intelligence blog to discover more.

7 Black Hat USA 2018 conference cybersecurity training highlights
12 common malware types you should know
Social engineering attacks – things to watch out for while online

The Haiku operating system has released R1/beta1

Melisha Dsouza
01 Oct 2018
6 min read
As promised by the Haiku team earlier this month, Haiku R1 is now released in its beta version! After the big gap since Haiku's last release in November 2012, users can expect a lot of upgrades in R1/beta1. The Haiku OS is known for its ease of use, responsiveness, and overall coherence. With improvements to its package manager, WebPositive, the media subsystem, and much more, Haiku has made the wait worth the while! Let's dive into some of the major upgrades of this release.

#1 Package management
The biggest upgrade in the R1 beta is the addition of a complete package management system. Finalized and merged during 2013, Haiku packages are a special type of compressed filesystem image. These are ‘mounted’ upon installation, and thereafter on each boot, by the packagefs. It is worth noting that since packages are merely "activated", not installed, the bootloader has been given some capacity to affect them. Users can boot into a previous package state (in case they took a bad update) or even blacklist individual files. Installations and uninstallations of packages are practically instant. Users can manage the installed package set on a non-running Haiku system by mounting its boot disk and then manipulating the /system/packages directory and associated configuration files.

The Haiku team has also introduced pkgman, the command-line interface to the package management system. Unlike most other package managers, where packages can be installed only by name, Haiku packages can also be searched for and installed by what they provide, e.g. pkgman install cmd:rsync or pkgman install devel:libsdl2, which will locate the most relevant package that provides that, and install it. Accompanying the package manager is a massively revamped HaikuPorts, containing a wide array of both native and ported software for Haiku.

#2 WebPositive upgrades
The team has made the system web browser much more stable than before. Glitches with YouTube are now fixed. While working on WebKit, the team also managed to fix a large number of bugs in Haiku itself, such as broken stack alignment, various kernel panics in the network stack, bad edge-case handling in app_server's rendering core, GCC upgrades, and many more. HaikuWebKit now supports Gopher, with its own network protocol layer.

#3 Completely rewritten network preflet
The newly rewritten network preflet is designed for ease of use and longevity. In addition to the interface configuration screens, the preflet is now also able to manage the network services on the machine, such as OpenSSH and ftpd. It uses a plugin-based API, which helps third-party network services like VPNs, web servers, etc. to integrate with it.

#4 User interface cleanup & live color updates
Mail and Tracker now sport Haiku-style toolbars and font-size awareness, among other applications. This will enable users to add proper DPI scaling and right-to-left layouts. Instead of requesting a specific system color and then manipulating it, most applications now instruct their controls to adopt certain colors based on the system color set directly.

#5 Media subsystem improvements
The Haiku team has made cleanups to the Media Kit to improve fault tolerance, latency correction, and performance issues. This will help with the Kit's overall resilience. HTTP and RTSP streaming support is integrated into the I/O layer of the Media Kit. Livestreams can now be played in WebPositive via HTML5 audio/video support, or in the native MediaPlayer. Significant improvements to the FFmpeg decoder plugin were made.
Rather than the ancient FFmpeg 0.10, the last version that GCC2 can compile, FFmpeg 4.0 is now used all around, for better support of both audio and video formats as well as significant performance improvements. The driver for HDA saw a good number of cleanups and wider audio support since the previous release. The DVB tuner subsystem saw a substantial amount of rework, and the APE reader was also cleaned up and added to the default builds.

#6 RemoteDesktop
Haiku's native RemoteDesktop application was improved and added to the builds. RemoteDesktop forwards drawing commands from the host system to the client system, which for most applications consumes significantly lower bandwidth. RemoteDesktop can connect to and run applications on any Haiku system that users have SSH access to; there is no need for a remote server.

#7 New thread scheduler
Haiku's kernel thread scheduler is now O(1) (constant time) with respect to threads, and O(log N) (logarithmic time) with respect to processor cores. The new limit is 64 cores, this being an arbitrary constant that can be increased at any time. There are new implementations of the memcpy and memset primitives for x86, which bring significant increases to their performance.

#8 Updated Ethernet & WiFi drivers
The ethernet & WiFi drivers have been upgraded to those from FreeBSD 11.1. This brings in support for Intel's newer “Dual Band” family, some of Realtek's PCI chipsets, and newer-model chipsets in all other existing drivers. Additionally, the FreeBSD compatibility layer now interfaces with Haiku's support for MSI-X interrupts, meaning that WiFi and ethernet drivers will take advantage of it wherever possible, leading to significant improvements in latency and throughput.

#9 Updated file system drivers
The NFSv4 client was finally merged into Haiku itself and is included by default. Additionally, Haiku's userlandfs, which supports running filesystem drivers in userland, is now shipped along with Haiku itself. It supports running BeOS filesystem drivers and Haiku filesystem drivers, and provides FUSE compatibility. As a result, various FUSE-based filesystem drivers are now available in the ports tree, including FuseSMB, among others.

Apart from the above-mentioned features, users can look forward to EFI bootloader and GPT support, a built-in debugger, general system stabilization, and much more! Reddit also saw comments from users eagerly awaiting this release.

After a long span of 17 years from its day of launch, it will be interesting to see how this upgrade is received by the masses. To know more about Haiku R1, head over to the official site.

Sugar operating system: A new OS to enhance GPU acceleration security in web apps
cstar: Spotify's Cassandra orchestration tool is now open source!
OpenSSL 1.1.1 released with support for TLS 1.3, improved side channel security


Typescript 2.9 release candidate is here

Sugandha Lahoti
21 May 2018
2 min read
The release candidate for TypeScript 2.9 is here! TypeScript is an open-source programming language which adds optional static typing to JavaScript. Let's jump into some highlights of the TypeScript 2.9 RC.

Changes to the keyof operator
TypeScript 2.9 changes the behavior of keyof to factor in both unique symbols as well as numeric literal types. TypeScript's keyof operator is a useful way to query the property names of an existing type. Before TypeScript 2.9, keyof never recognized symbolic keys. With this functionality, mapped object types like Partial, Required, or Readonly also recognize symbolic and numeric property keys, and no longer drop properties named by symbols.

Introduction of the new import() type syntax
One long-running pain point in TypeScript has been the inability to reference a type in another module, or the type of the module itself, without including an import at the top of the file. With TypeScript 2.9, there is a new import(...) type syntax. import types use the same syntax as ECMAScript's proposed import(...) expressions, and provide a convenient way to reference the type of a module, or the types which a module contains.

Trailing commas not allowed on rest parameters
This break was added for conformance with ECMAScript, as trailing commas are not allowed to follow rest parameters in the specification.

Changes to strictNullChecks
Unconstrained type parameters are no longer assignable to object in strictNullChecks. Since generic type parameters can be substituted with any primitive type, this is a precaution TypeScript has added under strictNullChecks. To fix this, you can add a constraint on object.

never can no longer be iterated over
Values of type never can no longer be iterated over. Users can avoid this behavior by using a type assertion to cast to the type any.

The entire list of changes and code files can be found on the Microsoft blog. You can also view the TypeScript roadmap for everything else that's coming in 2.9 and beyond.

How to install and configure TypeScript
How to work with classes in Typescript
Tools in TypeScript
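A small sketch of the two headline features described above, written against a TypeScript 2.9 or later compiler. The interface, the symbol name, and the module path in the commented import(...) example are all hypothetical.

// keyof now includes unique symbol and numeric literal keys (TS 2.9+).
const metadata = Symbol("metadata");

interface VersionedRecord {
  [metadata]: string; // symbolic key, now part of keyof
  42: boolean;        // numeric literal key, now part of keyof
  name: string;       // ordinary string key
}

type RecordKeys = keyof VersionedRecord; // typeof metadata | 42 | "name"

// Mapped types such as Partial no longer drop the symbol-named property.
type DraftRecord = Partial<VersionedRecord>;

// import(...) types: reference a type from another module without a
// top-level import statement, e.g. (assuming a local "./geometry" module
// that exports a Point type):
//   type Point = import("./geometry").Point;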


7 Black Hat USA 2018 conference cybersecurity training highlights: Hardware attacks, IO campaigns, Threat Hunting, Fuzzing, and more

Melisha Dsouza
11 Aug 2018
7 min read
The 21st international Black Hat USA 2018 conference has just concluded. It took place from August 4 to August 9, 2018 in Las Vegas, Nevada. It is one of the most anticipated conferences of the year for security practitioners, executives, business developers, and anyone who is a cybersecurity fanatic and wants to expand their horizons into the world of security. Black Hat USA 2018 opened with four days of technical training, followed by the two-day main conference featuring Briefings, Arsenal, Business Hall, and more. The conference covered exclusive training modules that provided hands-on offensive and defensive skill-set-building opportunities for security professionals. The Briefings covered the nitty-gritty of all the latest trends in information security. The Business Hall included a network of more than 17,000 InfoSec professionals who evaluated a range of security products offered by Black Hat sponsors.

Best cybersecurity trainings at the conference
For more than 20 years, Black Hat has been providing its attendees with trainings that stand the test of time and prove to be an asset in penetration testing. The training modules designed exclusively for Black Hat attendees are delivered by industry and subject-matter experts from all over the world, with the goal of shaping the information security landscape. Here's a look at a few from this year's conference.

#1 Applied Hardware Attacks: Embedded and IoT systems
This hands-on training was headed by Josh Datko and Joe Fitzpatrick. It introduced students to the common interfaces on embedded MIPS and ARM systems and taught them how to exploit physical access to grant themselves software privilege. It focused on UART, JTAG, and SPI interfaces. Students were given a brief architectural overview, and 70% of the course was hands-on labs: identifying, observing, interacting with, and eventually exploiting each interface. Basic analysis and manipulation of firmware images were also covered. This two-day course was geared toward pen testers, red teamers, exploit developers, and product developers who wished to learn how to take advantage of physical access to systems to assist and enable other attacks. The course also aimed to help security researchers and enthusiasts who are unwilling to 'just trust the hardware' gain deeper insight into how hardware works and can be undermined.

#2 Information Operations: Influence, exploit, and counter
This fast-moving class included hands-on exercises to apply and reinforce the skills learned during the course of the training. It also included a best IO campaign contest which was conducted live during the class. Trainers David Raymond and Gregory Conti covered information operations theory and practice in depth. Some of the main topics covered were IO strategies and tactics, countering information operations, and operations security and counter-intelligence. Attendees learned about online personas and explored the use of bots and AI to scale attacks and defenses. Other topics included understanding performance and assessment metrics, how to respond to an IO incident, the concepts of deception and counter-deception, and cyber-enabled IO.

#3 Practical Vulnerability Discovery with Fuzzing
Abdul Aziz Hariri and Brian Gorenc trained students on techniques to quickly identify common patterns in specifications that produce vulnerable conditions in the network. The course covered the process of building a successful fuzzer and highlighted public fuzzing frameworks that produce quality results.
#4 Active Directory Attacks for Red and Blue Teams
Nikhil Mittal's main aim in conducting this training was to change how you test an Active Directory environment. To secure Active Directory, it is important to understand the different techniques and attacks adversaries use against it, and many AD environments are not equipped to tackle the latest threats. This training therefore focused on attacking a modern AD environment using built-in tools like PowerShell and other trusted OS resources. It was based on real-world penetration tests and Red Team engagements against highly secured environments. Some of the techniques covered in the course were:
Extensive AD enumeration
Active Directory trust mapping and abuse
Privilege escalation (user hunting, delegation issues, and more)
Kerberos attacks and defense (Golden ticket, Silver ticket, Kerberoast, and more)
Abusing cross-forest trust (lateral movement across forests, privilege escalation, and more)
Attacking Azure integration and components
Abusing SQL Server trust in AD (command execution, trust abuse, lateral movement)
Credential replay attacks (Over-PTH, token replay, etc.)
Persistence (WMI, GPO, ACLs, and more)
Defenses (JEA, PAW, LAPS, deception, app whitelisting, Advanced Threat Analytics, etc.)
Bypassing defenses
Attendees also received free one-month access to an Active Directory environment, comprising multiple domains and forests, during and after the training.

#5 Hands-on Power Analysis and Glitching with ChipWhisperer
This course was suited for anyone dealing with embedded systems who needs to understand the threats that can be used to break even a "perfectly secure" system. Side-channel power analysis can be used to read out an AES-128 key in less than 60 seconds from a standard implementation on a small microcontroller. Colin O'Flynn helped students understand whether their systems were vulnerable to such an attack. The course was loaded with hands-on examples covering both the attacks and the theory behind them, and it included a ChipWhisperer-Lite, so students could walk away with the hardware used during the lab sessions. Topics covered during the two-day course included:
The theory behind side-channel power analysis
Measuring power in existing systems
Setting up the ChipWhisperer hardware and software
Several demonstrated attacks
Understanding and demonstrating glitch attacks
Analyzing your own hardware

#6 Threat Hunting with Attacker TTPs
The main aim of this class was a proper threat hunting program focused on maximizing the effectiveness of scarce network defense resources against a potentially limitless threat. Threat hunting takes a different perspective on network defense, relying on skilled operators to investigate and find the presence of malicious activity. Rather than relying on standard network defense and incident response (which target flagging known malware), the training focused on abnormal behaviors and the use of attacker Tactics, Techniques, and Procedures (TTPs).
Trainers Jared Atkinson, Robby Winchester, and Roberto Rodriquez taught students how to create threat hunting hypotheses based on attacker TTPs and use them to perform threat hunting operations and detect attacker activity. They also used free and open source data collection and analysis tools (Sysmon, ELK, and the Automated Collection and Enrichment Platform) to gather and analyze large amounts of host information for signs of malicious activity. Students applied these techniques and toolsets to create threat hunting hypotheses and hunt in a simulated enterprise network undergoing active compromise from various types of threat actors. The class was intended for defenders wanting to learn how to effectively hunt threats in enterprise networks.

#7 Hands-on Hardware Hacking Training
The class, taught by Joe Grand, took students through the process of reverse engineering and defeating the security of electronic devices. The comprehensive training covered:
Product teardown
Component identification
Circuit board reverse engineering
Soldering and desoldering
Signal monitoring and analysis
Memory extraction
These were done using a variety of tools including a logic analyzer, multimeter, and device programmer. The course concluded with a final challenge in which students identified, reverse engineered, and defeated the security mechanism of a custom embedded system. Anyone interested in hardware hacking, including security researchers, digital forensic investigators, design engineers, and executive management, benefitted from this class.

And that's not all! Other trainings included software-defined radio, a guide to threat hunting using the ELK stack and machine learning, AWS and Azure exploitation: making the cloud rain shells, and much more. This is just a brief overview of the Black Hat USA 2018 conference, where we have handpicked a select few trainings. You can see the full schedule along with the list of selected research papers at the Black Hat website. And if you missed out on this one, fret not: there is another conference happening soon, from December 3 to December 6, 2018. Check out the official website for details.

Top 5 cybersecurity trends you should be aware of in 2018
Top 5 cybersecurity myths debunked
A new WPA/WPA2 security attack in town: Wi-fi routers watch out!

Introducing PyOxidizer, an open source utility for producing standalone Python applications, written in Rust

Bhagyashree R
26 Jun 2019
4 min read
On Monday, Gregory Szorc, a Developer Productivity Engineer at Airbnb, introduced PyOxidizer, a Python application packaging and distribution tool written in Rust. The tool is available for Windows, macOS, and Linux. Sharing his vision behind the tool, Szorc wrote in the announcement, “I want PyOxidizer to provide a Python application packaging and distribution experience that just works with a minimal cognitive effort from Python application maintainers.”
https://twitter.com/indygreg/status/1143187250743668736

PyOxidizer aims to solve complex packaging and distribution problems so that developers can put their effort into building applications instead of juggling build systems and packaging tools. According to the GitHub README, “PyOxidizer is a collection of Rust crates that facilitate building libraries and binaries containing Python interpreters.” Its most visible component is the ‘pyoxidizer’ command line tool. With this tool, you can create new projects, add PyOxidizer to existing projects, produce binaries containing a Python interpreter, and perform various related tasks.

How PyOxidizer is different from other Python application packaging/distribution tools
PyOxidizer provides the following benefits over other Python application packaging/distribution tools:
It works across all popular platforms, unlike many other tools that only target Windows or macOS.
It works even if the executing system does not have Python installed.
It does not have special system requirements like SquashFS, container runtimes, etc.
Its startup performance is comparable to traditional Python execution.
It supports single-file executables with minimal or no system dependencies.

Here are some of the features PyOxidizer comes with:

Generates a standalone single executable file
One of the most important features of PyOxidizer is that it can produce a single executable file that contains a fully-featured Python interpreter, its extensions, the standard library, and your application's modules and resources. By exposing its lower-level functionality, PyOxidizer can also be used as a tool and software library for embedding self-contained Python interpreters.

Serves as a bridge between Rust and Python
The ‘Oxidizer’ part of PyOxidizer comes from Rust. Internally, it uses Rust to produce executables and to manage the embedded Python interpreter and its operations. Along with solving the packaging and distribution problem with Rust, PyOxidizer can also serve as a bridge between the two languages. This makes it possible to add a Python interpreter to any Rust project and vice versa. With PyOxidizer, you can bootstrap a new Rust project that contains an embedded version of Python and your application. “Initially, your project is a few lines of Rust that instantiates a Python interpreter and runs Python code. Over time, the functionality could be (re)written in Rust and your previously Python-only project could leverage Rust and its diverse ecosystem,” explained Szorc. The creator chose Rust for the run-time and build-time components because it is considered one of the superior systems programming languages and does not require considerable effort to solve difficult problems like cross-compiling. He believes that implementing the embedding component in Rust also opens more opportunities to embed Python in Rust programs. “This is largely an unexplored area in the Python ecosystem and the author hopes that PyOxidizer plays a part in more people embedding Python in Rust,” he added.
PyOxidizer executables are faster to start and import
During execution, binaries built with PyOxidizer do not have to do anything special, like creating a temporary directory to run the Python interpreter. Everything is loaded directly from memory without any explicit I/O operations. So, when a Python module is imported, its bytecode is loaded from a memory address in the executable using zero-copy. This makes the executables produced by PyOxidizer faster to start and import. A minimal Python sketch of this idea, importing modules from memory rather than from files, appears at the end of this article.

PyOxidizer is still in its early stages. The initial release is good at producing executables embedding Python. However, not much has been implemented yet to solve the distribution part of the problem. Some of the missing features we can expect in the future are an official build environment, support for C extensions, more robust packaging support, easy distribution, and more. The creator encourages Python developers to try the tool and share feedback with him or file an issue on GitHub. You can also contribute to the project via Patreon or PayPal.

Many users are excited to try this tool:
https://twitter.com/kevindcon/status/1143750501592211456
https://twitter.com/acemarke/status/1143389113871040517

Read the announcement made by Szorc to know more in detail.

Python 3.8 beta 1 is now ready for you to test
PyPI announces 2FA for securing Python package downloads
Matplotlib 3.1 releases with Python 3.6+ support, secondary axis support, and more
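PyOxidizer implements this mechanism inside the produced binary rather than in Python code, so the following is only a rough, assumption-level illustration of the general idea it describes: serving module code from memory instead of the filesystem, using Python's standard importlib machinery. The module name and its source below are made up for the example.

```python
import importlib.abc
import importlib.util
import sys

# Hypothetical "packed" modules: name -> source code. PyOxidizer itself embeds
# pre-compiled bytecode inside the executable; plain source strings are used
# here only to keep the illustration short.
PACKED_MODULES = {
    "greetings": "def hello():\n    return 'hello from an in-memory module'\n",
}

class InMemoryFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    """Finds and loads modules from PACKED_MODULES without touching the filesystem."""

    def find_spec(self, fullname, path, target=None):
        if fullname in PACKED_MODULES:
            return importlib.util.spec_from_loader(fullname, self)
        return None

    def create_module(self, spec):
        return None  # fall back to the default module object creation

    def exec_module(self, module):
        # Execute the stored source directly in the new module's namespace.
        exec(PACKED_MODULES[module.__name__], module.__dict__)

sys.meta_path.insert(0, InMemoryFinder())

import greetings  # resolved by InMemoryFinder, not by a file on disk
print(greetings.hello())
```

PyOxidizer goes further than this sketch by storing pre-compiled bytecode in read-only memory inside the binary, which is what removes the import-time I/O the article describes.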

DeepMind's Alphastar AI agent will soon anonymously play with European StarCraft II players

Sugandha Lahoti
11 Jul 2019
4 min read
Earlier this year, DeepMind's AI AlphaStar defeated two professional players at StarCraft II, a real-time strategy video game. Now, European StarCraft II players will get a chance to face off against experimental versions of AlphaStar as part of DeepMind's ongoing AI research.
https://twitter.com/MaxBakerTV/status/1149067938131054593

AlphaStar learns by imitating the basic micro and macro strategies used by players on the StarCraft ladder. A neural network was initially trained using supervised learning on anonymised human games released by Blizzard. Once the agents are trained from human game replays, they are then trained against other competitors in the "AlphaStar league". This is where a multi-agent reinforcement learning process starts. New competitors are added to the league (branched from existing competitors), and each of these agents then learns from games against the other competitors. This ensures that each competitor performs well against the strongest strategies and does not forget how to defeat earlier ones. A toy sketch of this league-style training loop appears at the end of this article.

Anyone who wants to participate in the experiment will have to opt into the chance to play against the StarCraft II program, via an option provided in an in-game pop-up window. Users can change their opt-in selection at any time. To ensure anonymity, all games will be blind test matches: European players that opt in won't know whether they've been matched up against AlphaStar. This helps ensure that all games are played under the same conditions, as players may react differently when they know they're playing against an AI. A win or a loss against AlphaStar will affect a player's MMR (Matchmaking Rating) like any other game played on the ladder.

"DeepMind is currently interested in assessing AlphaStar’s performance in matches where players use their usual mix of strategies," Blizzard said in its blog post. "Having AlphaStar play anonymously helps ensure that it is a controlled test, so that the experimental versions of the agent experience gameplay as close to a normal 1v1 ladder match as possible. It also helps ensure all games are played under the same conditions from match to match."

Some people have appreciated the anonymous testing feature. A Hacker News user commented, “Of course the anonymous nature of the testing is interesting as well. Big contrast to OpenAI's public play test. I guess it will prevent people from learning to exploit the bot's weaknesses, as they won't know they are playing a bot at all. I hope they eventually do a public test without the anonymity so we can see how its strategies hold up under focused attack.” Others find it interesting to see what happens if players know they are playing against AlphaStar.
https://twitter.com/hardmaru/status/1149104231967842304

AlphaStar will play as all three of StarCraft's in-universe races (Terran, Zerg, and Protoss). Pairings on the ladder will be decided according to normal matchmaking rules, which depend on how many players are online while AlphaStar is playing. It will not learn from the games it plays on the ladder, having been trained from human replays and self-play. AlphaStar will also use a camera interface and more restricted APM (actions per minute). Per the blog post, “AlphaStar has built-in restrictions, which cap its effective actions per minute and per second.
These caps, including the agents’ peak APM, are more restrictive than DeepMind’s demonstration matches back in January, and have been applied in consultation with pro players.”
https://twitter.com/Eric_Wallace_/status/1148999440121749504
https://twitter.com/Liquid_MaNa/status/1148992401157054464

DeepMind will benchmark the performance of a number of experimental versions of AlphaStar in order to gather a broad set of results during the testing period. DeepMind will use a player's replays and game data (skill level, MMR, the map played, race played, time/date played, and game duration) to assess and describe the performance of the AlphaStar system. However, DeepMind will remove identifying details from the replays, including usernames, user IDs, and chat histories. Other identifying details will be removed to the extent that they can be without compromising the research DeepMind is pursuing. For now, AlphaStar agents will play only in Europe. The research results will be released in a peer-reviewed scientific paper along with replays of AlphaStar's matches.

Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers
Deepmind’s AlphaZero shows unprecedented growth in AI, masters 3 different games
Deepmind’s AlphaFold is successful in predicting the 3D structure of a protein making major inroads for AI use in healthcare
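As a rough, assumption-level illustration of the league-style training described above (this is not DeepMind's code; real AlphaStar competitors are large neural networks, whereas the "agents" below are just rating numbers), a toy version of the loop could look like this: a new competitor branches from an existing league member, plays training games against the whole league, and is then added to the league itself.

```python
import random

def win_probability(rating_a: float, rating_b: float) -> float:
    """Elo-style stand-in for 'how often agent A beats agent B'."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def train_league(generations: int = 5, games_per_generation: int = 200):
    # The league starts with one agent, standing in for the network
    # bootstrapped by supervised learning on human replays.
    league = [1000.0]
    for _ in range(generations):
        # Branch a new competitor from an existing league member...
        challenger = random.choice(league)
        # ...then improve it through games against the whole league, so it
        # keeps facing both the newest and the earliest strategies.
        for _ in range(games_per_generation):
            opponent = random.choice(league)
            if random.random() < win_probability(challenger, opponent):
                challenger += 4.0   # stand-in for a policy update after a win
            else:
                challenger -= 2.0   # and a smaller corrective update after a loss
        league.append(challenger)
    return league

if __name__ == "__main__":
    random.seed(0)
    print([round(rating) for rating in train_league()])
```

The point of the structure is the one the article describes: every new competitor keeps playing the entire league, earlier members included, so improving against the strongest strategies does not come at the cost of forgetting how to beat older ones.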