
How-To Tutorials


Digging Deeper

Packt
03 Jan 2017
6 min read
In this article by Craig Clayton, author of the book iOS 10 Programming for Beginners, we went over the basics of Swift to get you warmed up. Now we will dig deeper and learn some more programming concepts that build on what you have already learned. In this article, we will cover the following topics:

- Ranges
- Control flow

Creating a Playground project

Launch Xcode and click on Get started with a playground. The options screen for creating a new playground will appear. Name your new playground SwiftDiggingDeeper and make sure that the Platform is set to iOS. Now, delete everything inside the file and toggle on the Debug panel using either the toggle button or Cmd + Shift + Y.

Ranges

Ranges are generic data types that represent a sequence of numbers.

Closed range

Imagine the numbers from 10 to 20. Rather than writing out each value, we can use a range to represent all of these numbers in shorthand form. To do this, we keep only 10 and 20 and tell Swift that we want to include all of the numbers in between. This is where the range operator (...) comes into play. In Playgrounds, let's create a constant called range and set it equal to 10...20:

    let range = 10...20

The range that we just entered says that we want the numbers between 10 and 20, as well as both 10 and 20 themselves. This type of range is known as a closed range. We also have what is called a half closed range.

Half closed range

Let's make another constant called halfClosedRange and set it equal to 10..<20:

    let halfClosedRange = 10..<20

A half closed range is the same as a closed range except that the end value is not included. In this example, that means that 10 through 19 are included and 20 is excluded. At this point, you will notice that the results panel just shows CountableClosedRange(10...20) and CountableRange(10..<20). We cannot see all the numbers within the range. In order to see all the numbers, we need to use a loop.

Control flow

In Swift, we use a variety of control statements.

For-in loop

One of the most common control statements is a for-in loop. A for-in loop allows you to iterate over each element in a sequence. Here is what a for-in loop looks like:

    for <value> in <sequence> {
        // Code here
    }

We start our for-in loop with for, which is followed by <value>. This is actually a local constant (only the for-in loop can access it) and can be any name you like; typically, you will want to give it an expressive name. Next, we have in, which is followed by <sequence>, where we supply our sequence of numbers. Let's write the following into Playgrounds:

    for value in range {
        print("closed range - \(value)")
    }

Notice that, in our debug panel, we see all of the numbers we wanted in our range. Let's do the same for our constant halfClosedRange by adding the following:

    for index in halfClosedRange {
        print("half closed range - \(index)")
    }

In our debug panel, we see that we get the numbers 10 through 19. One thing to note is that these two for-in loops use different local names: in the first loop we used value, and in the second one we used index. You can name them whatever you choose.

In addition, in the two examples we used constants, but we could actually just use the ranges within the loop. Please add the following:

    for index in 0...3 {
        print("range inside - \(index)")
    }

Now you will see 0 to 3 printed in the debug panel. What if you wanted the numbers to go in reverse order? Let's input the following for-in loop:

    for index in (10...20).reversed() {
        print("reversed range - \(index)")
    }

We now have the numbers in descending order in our debug panel. When we add a range literal to a for-in loop like this, we have to wrap the range inside parentheses so that Swift recognizes that the period before reversed() is not a decimal point.

The while loop

A while loop evaluates a Boolean expression at the start of the loop, and the set of statements runs until the condition becomes false. It is important to note that a while loop can execute zero or more times. Here is the basic syntax of a while loop:

    while <condition> {
        // statement
    }

Let's write a while loop in Playgrounds and see how it works. Please add the following:

    var y = 0
    while y < 50 {
        y += 5
        print("y:\(y)")
    }

This loop starts with a variable that begins at zero. Before the while loop executes, it checks whether y is less than 50 and, if so, it continues into the loop. By using the += operator, we increment y by 5 each time. Our while loop will continue to do this until y is no longer less than 50. Now, let's add the same while loop after the one we created and see what happens:

    while y < 50 {
        y += 5
        print("y:\(y)")
    }

You will notice that the second while loop never runs. This may not seem important until we look at our next type of loop.

The repeat-while loop

A repeat-while loop is pretty similar to a while loop in that it continues to execute the set of statements until a condition becomes false. The main difference is that the repeat-while loop does not evaluate its Boolean condition until the end of the loop. Here is the basic syntax of a repeat-while loop:

    repeat {
        // statement
    } while <condition>

Let's write a repeat-while loop in Playgrounds and see how it works. Type the following into Playgrounds:

    var x = 0
    repeat {
        x += 5
        print("x:\(x)")
    } while x < 100
    print("repeat completed x: \(x)")

Our repeat-while loop executes first and increments x by 5, and only afterwards (as opposed to checking the condition before the body, as a while loop does) does it check whether x is less than 100. This means that our repeat-while loop will continue until x reaches 100. But here is where it gets interesting. Let's add another repeat-while loop after the one we just created:

    repeat {
        x += 5
        print("x:\(x)")
    } while x < 100
    print("repeat completed again x: \(x)")

This time, the repeat-while loop incremented x to 105. This happens because the Boolean expression does not get evaluated until after x has already been incremented by 5. Knowing this behavior will help you pick the right loop for your situation.

So far, we have looked at three loops: the for-in loop, the while loop, and the repeat-while loop. We will use the for-in loop again, but first we need to talk about collections.

Summary

This article covered ranges and control flow using Xcode 8.
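One small aside that is not part of the original article: Swift's global stride(from:through:by:) function is another way to walk a numeric sequence in reverse or in custom steps, without wrapping a range in parentheses and calling reversed(). The values below are purely illustrative.

    // Sketch only (assumes Swift 3 or later, as used in this article).
    // stride(from:through:by:) counts from 20 down to 10 in steps of 2.
    for value in stride(from: 20, through: 10, by: -2) {
        print("stride - \(value)")   // prints 20, 18, 16, 14, 12, 10
    }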

Walkthrough of Storm UI

Packt
04 Mar 2018
5 min read
In this article by Ankit Jain, the author of the book Mastering Apache Storm, we will see how to start the Storm UI daemon. Before starting the Storm UI daemon, we assume that you have a running Storm cluster; the Storm cluster deployment steps are covered in the previous section. Now, go to the Storm home directory (cd $STORM_HOME) on the leader Nimbus machine and run the following command to start the Storm UI daemon:

    $> cd $STORM_HOME
    $> bin/storm ui &

By default, the Storm UI starts on port 8080 of the machine where it is started. Now, browse to http://nimbus-node:8080 to view the Storm UI, where nimbus-node is the IP address or hostname of the Nimbus machine. The Storm home page contains the following sections.

Cluster Summary section

This portion of the Storm UI shows the version of Storm deployed in the cluster, the uptime of the Nimbus nodes, the number of free worker slots, the number of used worker slots, and so on. While submitting a topology to the cluster, the user first needs to make sure that the value of the Free slots column is not zero; otherwise, the topology doesn't get any workers for processing and waits in the queue until a worker becomes free.

Nimbus Summary section

This portion of the Storm UI shows the number of Nimbus processes running in the Storm cluster. It also shows the status of each Nimbus node: a node with the status "Leader" is an active master, while a node with the status "Not a Leader" is a passive master.

Supervisor Summary section

This portion of the Storm UI shows the list of supervisor nodes running in the cluster, along with their Id, Host, Uptime, Slots, and Used slots columns.

Nimbus Configuration section

This portion of the Storm UI shows the configuration of the Nimbus node. Some of the important properties are:

- supervisor.slots.ports
- storm.zookeeper.port
- storm.zookeeper.servers
- storm.zookeeper.retry.interval
- worker.childopts
- supervisor.childopts

The definition of each of these properties is covered in Chapter 3.

Topology Summary section

This portion of the Storm UI shows the list of topologies running in the Storm cluster, along with their ID, the number of workers assigned to the topology, the number of executors, the number of tasks, uptime, and so on. Let's deploy the sample topology (if it is not running already) in a remote Storm cluster by running the following command:

    $> cd $STORM_HOME
    $> bin/storm jar ~/storm_example-0.0.1-SNAPSHOT-jar-with-dependencies.jar com.stormadvance.storm_example.SampleStormClusterTopology storm_example

We have created the SampleStormClusterTopology topology with three worker processes, two executors for SampleSpout, and four executors for SampleBolt. The details of workers, executors, and tasks are covered in the next chapter. After submitting SampleStormClusterTopology to the Storm cluster, refresh the Storm home page; a row is added for SampleStormClusterTopology in the Topology Summary section. The topology row contains the name of the topology, the unique ID of the topology, the status of the topology, uptime, the number of workers assigned to the topology, and so on. The possible values of the status field are ACTIVE, KILLED, and INACTIVE.

Let's click on SampleStormClusterTopology to view its detailed statistics. The first screen contains information about the number of workers, executors, and tasks assigned to the SampleStormClusterTopology topology. The next screen contains information about the spouts and bolts, including the number of executors and tasks assigned to each spout and bolt. The information shown on these screens is:

- Topology stats: This section gives information about the number of tuples emitted, transferred, and acknowledged, the capacity, latency, and so on, within windows of 10 minutes, 3 hours, 1 day, and since the start of the topology.
- Spouts (All time): This section shows the statistics of all the spouts running inside the topology. Detailed information about spout stats is covered in Chapter 3.
- Bolts (All time): This section shows the statistics of all the bolts running inside the topology. Detailed information about bolt stats is covered in Chapter 3.
- Topology actions: This section allows us to perform activate, deactivate, rebalance, kill, and other operations on the topology directly through the Storm UI:
  - Deactivate: Click on Deactivate to deactivate the topology. Once the topology is deactivated, the spout stops emitting tuples and the status of the topology changes to INACTIVE on the Storm UI. Deactivating a topology does not free the Storm resources.
  - Activate: Click on Activate to activate the topology. Once the topology is activated, the spout again starts emitting tuples.
  - Kill: Click on Kill to destroy/kill the topology. Once the topology is killed, it frees all the Storm resources allotted to this topology. While killing the topology, Storm first deactivates the spouts and waits for the kill time mentioned in the alert box, so the bolts have a chance to finish processing the tuples emitted by the spouts before the kill command completes.

Finally, go back to the Storm UI's home page to check the status of SampleStormClusterTopology.

Summary

We have seen how to start the Storm UI daemon and walked through the Storm home page and its sections.
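As a supplementary sketch that is not part of the original excerpt, the same topology actions offered by the Storm UI can also be issued from the storm command-line client; the topology name and wait times below are placeholders and should match the name shown in the Topology Summary section.

    # Sketch only - run from $STORM_HOME on a machine that can reach the Nimbus node.
    $> bin/storm deactivate SampleStormClusterTopology       # spout stops emitting tuples
    $> bin/storm activate SampleStormClusterTopology         # spout resumes emitting tuples
    $> bin/storm rebalance SampleStormClusterTopology -w 10  # redistribute workers after a 10-second delay
    $> bin/storm kill SampleStormClusterTopology -w 30       # deactivate, wait 30 seconds, then free the topology's resources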

5 Key skills for web and app developers to learn in 2020

Richard Gall
20 Dec 2019
5 min read
Web and application development can change quickly. Much of this is driven by user behavior and user needs - and if you can't keep up with it, it's going to be impossible to keep your products and projects relevant. The only way to do that, of course, is to ensure your skills are up to date and constantly looking forward to what might be coming. You can't predict the future, but you can prepare yourself. Here are 5 key skill areas that we think web and app developers should focus on in 2020.

Artificial intelligence

It's impossible to overstate the importance of AI in application development at the moment. Yes, it's massively hyped, but that's largely because it's so ubiquitous. Indeed, to a certain extent many users won't even realise they're interacting with AI or machine learning systems. You might even say that that's when it's used best. The ways in which AI can be used by web and app developers are extensive and constantly growing. Perhaps the most obvious is personal recommendations, but it's chatbots and augmented reality that are really pushing the boundaries of what's possible with AI in the development field. Artificial intelligence might sound daunting if you're primarily a web developer. But it shouldn't - you don't need a computer science or math degree to use it effectively. There are now many platforms and tools available to use machine learning technology out of the box, from Azure's Cognitive Services and Amazon's Rekognition to ML Kit, built by Google for mobile developers. Learn how to build smart, AI-backed applications with Azure Cognitive Services in Azure Cognitive Services for Developers [Video].

New programming languages

Earlier this year I wrote about how polyglot programming (being able to use more than one language) "allows developers to choose the right language to solve tough engineering problems." For web and app developers, who are responsible for building increasingly complex applications and websites in as elegant and as clean a manner as possible, this is particularly true. The emergence of languages like TypeScript and Kotlin attests to the importance of keeping your programming proficiency up to date. Moreover, you could even say that they highlight that, however popular core languages like JavaScript and Java are, there are now some tasks that they're just not capable of dealing with. So, this doesn't mean you should just ditch your favored programming languages in 2020. But it does mean that learning a new language is a great way to build your skill set. Explore new programming languages with eBook and video bundles here.

Accessibility

Web accessibility is a topic that has been overlooked for too long. That needs to change in 2020. It's not hard to see how it gets ignored. When the pressure to deliver software is high, thinking about the consequences of specific design decisions on different types of users is almost certainly going to be pushed to the bottom of developers' priorities. But if anything this means we need a two-pronged approach - on the one hand, developers need to commit to learning web accessibility themselves, but they also need to be evangelists in communicating its importance to non-technical team members. The benefits of this will be significant: it could be a big step towards a more inclusive digital world, but from a personal perspective, it will also help developers to become more aware and well-rounded in their design decisions.
And insofar as no one's taking real leadership for this at the moment, it's the perfect opportunity for developers to prove their leadership chops.

Read next: It's a win for Web accessibility as courts can now order companies to make their sites WCAG 2.0 compliant

JAMStack and (sort of) static websites

Traditional CMSes like WordPress can be a pain for developers if you want to build something that is more customized than what you get out of the box. This is one of the reasons why JAMstack (a term coined by Netlify) is so popular - combining JavaScript, APIs, and markup, it offers web developers a way to build secure, performant websites very quickly. To a certain extent, JAMstack is the next generation of static websites; but JAMstack sites aren't exactly static, as they call data from the server side through APIs. Developers then call on the help of templated markup - usually in the form of static site generators (like Gatsby.js) or build tools - to act as a pre-built front end.

The benefits of JAMstack as an approach are well-documented. Perhaps the most important, though, is that it offers a really great developer experience. It allows you to build with the tools that you want to use, integrate with services you might already be using, and minimize the level of complexity that can come with some development approaches. Get started with Gatsby.js and find out how to use it in JAMstack with The Gatsby Masterclass video.

State management

We've talked about state management recently - in fact, lots of people have been talking about it. We won't go into detail about what it involves, but the issue has grown as increasing app complexity has made it harder to gain a single source of truth on what's actually happening inside our applications. If you haven't already, it's essential to learn some of the design patterns and approaches for managing application state that have emerged over the last couple of years. The two most popular - Flux and Redux - are very closely associated with React, but for Vue developers Vuex is well worth learning. Thinking about state management can feel brain-wrenching at times. However, getting to grips with it can really help you to feel more in control of your projects. Get up and running with Redux quickly with the Redux Quick Start Guide.
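To make the single-source-of-truth idea a little more concrete, here is a minimal, Redux-style reducer sketch in TypeScript. It is illustrative only and not taken from any of the libraries mentioned above; the state shape and action names are invented for the example.

    // Illustrative sketch: all state lives in one object and can only change
    // through dispatched actions handled by a pure reducer function.
    type State = { count: number };
    type Action = { type: "increment" } | { type: "decrement" };

    function reducer(state: State, action: Action): State {
      switch (action.type) {
        case "increment":
          return { count: state.count + 1 };
        case "decrement":
          return { count: state.count - 1 };
        default:
          return state; // unknown actions leave the state untouched
      }
    }

    // A tiny hand-rolled dispatch sequence standing in for a real store:
    let state: State = { count: 0 };
    state = reducer(state, { type: "increment" });
    state = reducer(state, { type: "increment" });
    console.log(state); // { count: 2 }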

Saving backups on cloud services with ElasticSearch plugins

Savia Lobo
29 Dec 2017
9 min read
[box type="note" align="" class="" width=""]Our article is a book excerpt taken from Mastering Elasticsearch 5.x. written by Bharvi Dixit. This book guides you through the intermediate and advanced functionalities of Elasticsearch, such as querying, indexing, searching, and modifying data. In other words, you will gain all the knowledge necessary to master Elasticsearch and put into efficient use.[/box] This article will explain how Elasticsearch, with the help of additional plugins, allows us to push our data outside of the cluster to the cloud. There are three possibilities where our repository can be located, at least using officially supported plugins: The S3 repository: AWS The HDFS repository: Hadoop clusters The GCS repository: Google cloud services The Azure repository: Microsoft's cloud platform Let's go through these repositories to see how we can push our backup data on the cloud services. The S3 repository The S3 repository is a part of the Elasticsearch AWS plugin, so to use S3 as the repository for snapshotting, we need to install the plugin first on every node of the cluster and each node must be restarted after the plugin installation: sudo bin/elasticsearch-plugin install repository-s3 After installing the plugin on every Elasticsearch node in the cluster, we need to alter their configuration (the elasticsearch.yml file) so that the AWS access information is available. The example configuration can look like this: cloud: aws: access_key: YOUR_ACCESS_KEY secret_key: YOUT_SECRET_KEY To create the S3 repository that Elasticsearch will use for snapshotting, we need to run a command similar to the following one: curl -XPUT 'http://localhost:9200/_snapshot/my_s3_repository' -d '{ "type": "s3", "settings": { "bucket": "bucket_name" } }' The following settings are supported when defining an S3-based repository: bucket: This is the required parameter describing the Amazon S3 bucket to which the Elasticsearch data will be written and from which Elasticsearch will read the data. region: This is the name of the AWS region where the bucket resides. By default, the US Standard region is used. base_path: By default, Elasticsearch puts the data in the root directory. This parameter allows you to change it and alter the place where the data is placed in the repository. server_side_encryption: By default, encryption is turned off. You can set this parameter to true in order to use the AES256 algorithm to store data. chunk_size: By default, this is set to 1GB and specifies the size of the data chunk that will be sent. If the snapshot size is larger than the chunk_size, Elasticsearch will split the data into smaller chunks that are not larger than the size specified in the chunk_size. The chunk size can be specified in size notations such as 1GB, 100mb, and 1024kB. buffer_size: The size of this buffer is set to 100mb by default. When the chunk size is greater than the value of the buffer_size, Elasticsearch will split it into buffer_size fragments and use the AWS multipart API to send it. The buffer size cannot be set lower than 5 MB because it disallows the use of the multipart API. endpoint: This defaults to AWS's default S3 endpoint. Setting a region overrides the endpoint setting. protocol: Specifies whether to use http or https. It default to cloud.aws.protocol or cloud.aws.s3.protocol. compress: Defaults to false and when set to true. This option allows snapshot metadata files to be stored in a compressed format. Please note that index files are already compressed by default. 
read_only: Makes a repository to be read only. It defaults to false. max_retries: This specifies the number of retries Elasticsearch will take before giving up on storing or retrieving the snapshot. By default, it is set to 3. In addition to the preceding properties, we are allowed to set two additional properties that can overwrite the credentials stored in elasticsearch.yml, which will be used to connect to S3. This is especially handy when you want to use several S3 repositories, each with its own security settings: access_key: This overwrites cloud.aws.access_key from elasticsearch.yml secret_key: This overwrites cloud.aws.secret_key from elasticsearch.yml    Note:    AWS instances resolve S3 endpoints to a public IP. If the Elasticsearch instances reside in a private subnet in an AWS VPC then all traffic to S3 will go through that VPC's NAT instance. If your VPC's NAT instance is a smaller instance size (for example, a t1.micro) or is handling a high volume of network traffic, your bandwidth to S3 may be limited by that NAT instance's networking bandwidth limitations. So, if you running your Elasticsearch cluster inside a VPC then make sure that you are using instances with a high networking bandwidth and there is no network congestion. Note:  Instances residing in a public subnet in an AWS VPC will   connect to S3 via the VPC's Internet gateway and not be bandwidth limited by the VPC's NAT instance. The HDFS repository If you use Hadoop and its HDFS (http://wiki.apache.org/hadoop/HDFS) filesystem, a good alternative to back up the Elasticsearch data is to store it in your Hadoop cluster. As with the case of S3, there is a dedicated plugin for this. To install it, we can use the following  command: sudo bin/elasticsearch-plugin install repository-hdfs   Note :    The HDFS snapshot/restore plugin is built against the latest Apache Hadoop 2.x (currently 2.7.1). If your Hadoop distribution is not protocol compatible with Apache Hadoop, you can replace the Hadoop libraries inside the plugin folder with your own (you might have to adjust the security permissions required). Note:  Even if Hadoop is already installed on the Elasticsearch nodes, for security        reasons, the required libraries need to be placed under the plugin folder. Note that in most cases, if the distribution is compatible, one simply needs to configure the repository with the appropriate Hadoop configuration files. After installing the plugin on each node in the cluster and restarting every node, we can use the following command to create a repository in our Hadoop cluster: curl -XPUT 'http://localhost:9200/_snapshot/es_hdfs_repository' -d '{ "type": "hdfs" "settings": { "uri": "hdfs://namenode:8020/", "path": "elasticsearch_snapshots/es_hdfs_repository" } }' The available settings that we can use are as follows: uri: This is a required parameter that tells Elasticsearch where HDFS resides. It should have a format like hdfs://HOST:PORT/. path: This is the information about the path where snapshot files should be stored. It is a required parameter. load_default: This specifies whether the default parameters from the Hadoop configuration should be loaded and set to false if the reading of the settings should be disabled. This setting is enabled by default. chunk_size: This specifies the size of the chunk that Elasticsearch will use to split the snapshot data. If you want the snapshotting to be faster, you can use smaller chunks and more streams to push the data to HDFS. By default, it is disabled. 
conf.<key>: This is an optional parameter and tells where a key is in any Hadoop argument. The value provided using this property will be merged with the configuration. As an alternative, you can define your HDFS repository and its settings inside the elasticsearch.yml file of each node as follows: repositories: hdfs: uri: "hdfs://<host>:<port>/" path: "some/path" load_defaults: "true" conf.<key> : "<value>" compress: "false" chunk_size: "10mb" The Azure repository Just like Amazon S3, we are able to use a dedicated plugin to push our indices and metadata to Microsoft cloud services. To do this, we need to install a plugin on every node of the cluster, which we can do by running the following command: sudo bin/elasticsearch-plugin install repository-azure The configuration is also similar to the Amazon S3 plugin configuration. Our elasticsearch.yml file should contain the following section: cloud: azure: storage: my_account: account: your_azure_storage_account key: your_azure_storage_key Do not forget to restart all the nodes after installing the plugin. After Elasticsearch is configured, we need to create the actual repository, which we do by running the following command: curl -XPUT 'http://localhost:9200/_snapshot/azure_repository' -d '{ "type": "azure" }' The following settings are supported by the Elasticsearch Azure plugin: account: Microsoft Azure account settings to be used. container: As with the bucket in Amazon S3, every piece of information must reside in the container. This setting defines the name of the container in the Microsoft Azure space. The default value is elasticsearch-snapshots. base_path: This allows us to change the place where Elasticsearch will put the data. By default, the value for this setting is empty which causes Elasticsearch to put the data in the root directory. compress: This defaults to false and when enabled it allows us to compress the metadata files during the snapshot creation. chunk_size: This is the maximum chunk size used by Elasticsearch (set to 64m by default, and this is also the maximum value allowed). You can change it to change the size when the data should be split into smaller chunks. You can change the chunk size using size value notations such as, 1g, 100m, or 5k.  An example of creating a repository using the settings follows: curl -XPUT "http://localhost:9205/_snapshot/azure_repository" -d' { "type": "azure", "settings": { "container": "es-backup-container", "base_path": "backups", "chunk_size": "100m", "compress": true } }' The Google cloud storage repository Similar to Amazon S3 and Microsoft Azure, we can use a GCS repository plugin for snapshotting and restoring of our indices. The settings for this plugin are almost similar to other cloud plugins. To know how to work with the Google cloud repository plugin please refer to the following URL: https://www.elastic.co/guide/en/elasticsearch/plugins/5.0/repository-gcs.htm Thus, in the article we learn how to carry out backup of your data from Elasticsearch clusters to the cloud, i.e. within different cloud repositories by making use of the additional plugin options with Elasticsearch. If you found our excerpt useful, you may explore other interesting features and advanced concepts of Elasticsearch 5.x like aggregation, index control, sharding, replication, and clustering in the book Mastering Elasticsearch 5.x.
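The excerpt above stops at registering repositories. As a brief, hedged addition that is not part of the original excerpt: once a repository such as my_s3_repository is registered, snapshots can be created and restored through Elasticsearch's standard _snapshot API, as sketched below (snapshot_1 is a placeholder name).

    # Sketch only - assumes the my_s3_repository repository created earlier in this article.
    # Create a snapshot of all indices and wait for it to finish:
    curl -XPUT 'http://localhost:9200/_snapshot/my_s3_repository/snapshot_1?wait_for_completion=true'

    # Inspect the snapshot:
    curl -XGET 'http://localhost:9200/_snapshot/my_s3_repository/snapshot_1'

    # Restore it (the indices being restored must be closed or absent):
    curl -XPOST 'http://localhost:9200/_snapshot/my_s3_repository/snapshot_1/_restore'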

Shopify announces Fulfillment network, video and 3D model assets, custom storefront tools and more!

Vincy Davis
20 Jun 2019
6 min read
At the ongoing Shopify Unite 2019 conference, Shopify has announced a number of new products like Fulfillment network, video and 3D model assets, custom storefront tools, new online store design experience and others. Most of these products will be launched later this year. Here are the major highlights: Shopify Fulfillment Network Shopify Fulfillment Network, will be a dispersed network of fulfillment centers which uses machine learning to automatically select the optimal inventory quantities per location, as well as the closest fulfillment option for each of the customers’ shipments. Once available, it will be possible to simply install the app, select the products, get a quote, and begin selling the products. Entrepreneurs will be able to provide fast, low-cost delivery to their customers while maintaining ownership of customer data and a branded shipping experience. A single back office A merchants order, inventory, and customer data will stay synced and up-to-date across all warehouse locations and channels. Recommended warehouse locations To save costs on shipping, it will be possible to find the best locations, based on where the sales are coming from. Low stock alerts When inventory runs low, merchants will know when to replenish to continue meeting demands. 99.5% order accuracy The correct package will be chosen and out the door on time, with good accuracy. Hands-on warehouse help A dedicated account manager will assist the merchant in finding the best path, to reach their customers so the costs can be kept low. Shopify Fulfillment Network is currently available in the United States, and interested merchants can apply for early access. https://twitter.com/treklightgear/status/1141406217815724034 Native support to video and 3D model assets Shopify’s product will natively support video and 3D model assets, thus adding a new dimension to products and providing a richer purchase experience for customers. This feature is expected to be released later this year. Manage media through a single location It will now be possible to upload, access, and store video and 3D models from the same place where the images are being managed currently. Deploy through the new Shopify video player Users can use, one of the 10 starter themes, to easily display video 3D models using the new Shopify player for video or the viewer for Shopify AR. New editor apps Shopify is inviting partners and developers to create additional apps and custom integrations to open up new ways, to create and modify images, videos, and AR experiences. https://twitter.com/Scobleizer/status/1141456478496161797 https://twitter.com/thewakdesigns/status/1141607172670926848 New Online Store Design Experience The new online store design will provide entrepreneurs, with more options for customization. This will give them more control over the layout and aesthetic of their store. This feature is expected to be released later this year. Easier customization at the page and store level Any page can now be customized by using sections, just like the homepage. It is also now possible for users to save time by setting content on multiple pages, using master pages. Portable content that moves with you From now on, users will not have to make a duplicate of their theme or move content over manually. The shop’s content will follow the owner so that they are able to make changes, like downloading a new version or trying out a new one. A new workspace to update your store It will now be possible to edit and preview updates before publishing. 
Any minor tweak or major changes can be done in a new space to draft changes. Custom storefront tools Shopify’s Storefront API allows customers to use the custom storefront tools to free their storefront from certain back-end dependencies. This gives customers the flexibility to sell anywhere and in any way they want. This feature is more exciting for more complex or niche businesses that use the web, mobile, gaming, and other interactive mediums as storefronts to reach their customers. Shopify merchants have already started to use the Storefront API’s. For example, NTWRK hosted a live stream shopping show using Shopify’s Storefront API. Connect microservices to create personalized experiences Third-party shipping services can be used for blogs, storefront, and product pages or accurate shipment alerts Turn the world into your storefront Entrepreneurs can engage with their customers through vending machines, live streams, smart mirrors, voice shopping, and more. Speedy and scalable to have development teams work in parallel The flexible architecture will enable the development teams to create the experiences and storefronts, according to their vision. Interested merchants can create custom experiences on the customer storefront tools website. https://twitter.com/CarloTeran/status/1141475420904157185 Customer loyalty with retail shoppers With the new Shopify Point of Sale cart app extensions, users can apply and edit loyalty and promotional details directly from the customer cart. Apply discounts lightning-quick The number of clicks needed to apply a discount has been reduced to one, thus saving users 10 seconds for each sale on average. Important information at your fingertips Key customer details, from birthdays to reward milestones will surface automatically and in-context, so the merchants or their staff won’t have to navigate to the apps to get alerts. More flexibility Shopify’s app partners will give merchants the flexibility to pick a program that works best for their customer experience, like rewarding a long-time customer, online or in-store. Interested merchants can learn more about the apps on the POS Loyalty and Promotion Apps website. https://twitter.com/ShawnBouchard/status/1141483355252199425 Shopify Payments in multiple currencies and languages From this year onwards, merchants can run their business in their own preferred language. Shopify is already available in French, German, Japanese, Italian, Brazilian Portuguese, and Spanish. Additionally, Shopify will be available in 11 additional languages like Dutch, Simplified Chinese, and more. Also, Shopify Payments will enable selling in multiple currencies and will be globally available to all Shopify merchants later this year. The displayed prices will use simple rounding rules and automatically adjust based on current foreign -exchange rates. Soon shoppers will be able to convert between nine major currencies -GBP, AUD, CAD, EUR, HKD, JPY, NZD, SGD, and USD, thus using their preferred way to pay. https://twitter.com/anthonycook/status/1141392464818900996 Users of Shopify are obviously quite elated with all the announcements. People also touted this as Shopify’s way to combat Amazon, its main competitor in the market. https://twitter.com/tomfgoodwin/status/1141434508257964032 Visit the Shopify blog, for more details. 

Machine Learning Algorithms: Implementing Naive Bayes with Spark MLlib

Wilson D'souza
07 Nov 2017
7 min read
[box type="note" align="" class="" width=""]In this article by Siamak Amirghodsi, Meenakshi Rajendran, Broderick Hall, and Shuen Mei from their book Apache Spark 2.x Machine Learning Cookbook, we look at how to implement Naïve Bayes classification algorithm with Spark 2.0 MLlib. The associated code and exercise are available at the end of the article.[/box] How to implement Naive Bayes with Spark MLlib Naïve Bayes is one of the most widely used classification algorithms which can be trained and optimized quite efficiently. Spark’s machine learning library, MLlib, primarily focuses on simplifying machine learning and has great support for multinomial naïve Bayes and Bernoulli naïve Bayes. Here we use the famous Iris dataset and use Apache Spark API NaiveBayes() to classify/predict which of the three classes of flower a given set of observations belongs to. This is an example of a multi-class classifier and requires multi-class metrics for measurements of fit. Let’s have a look at the steps to achieve this: For the Naive Bayes exercise, we use a famous dataset called iris.data, which can be obtained from UCI. The dataset was originally introduced in the 1930s by R. Fisher. The set is a multivariate dataset with flower attribute measurements classified into three groups. In short, by measuring four columns, we attempt to classify a species into one of the three classes of Iris flower (that is, Iris Setosa, Iris Versicolour, Iris Virginica).We can download the data from here: https://archive.ics.uci.edu/ml/datasets/Iris/  The column definition is as follows: Sepal length in cm Sepal width in cm Petal length in cm Petal width in cm  Class: -- Iris Setosa => Replace it with 0 -- Iris Versicolour => Replace it with 1 -- Iris Virginica => Replace it with 2 The steps/actions we need to perform on the data are as follows: Download and then replace column five (that is, the label or classification classes) with a numerical value, thus producing the iris.data.prepared data file. The Naïve Bayes call requires numerical labels and not text, which is very common with most tools. Remove the extra lines at the end of the file. Remove duplicates within the program by using the distinct() call. Start a new project in IntelliJ or in an IDE of your choice. Make sure that the necessary JAR files are included. Set up the package location where the program will reside: package spark.ml.cookbook.chapter6 Import the necessary packages for SparkSession to gain access to the cluster and Log4j.Logger to reduce the amount of output produced by Spark:  import org.apache.spark.mllib.linalg.{Vector, Vectors} import org.apache.spark.mllib.regression.LabeledPoint import org.apache.spark.mllib.classification.{NaiveBayes, NaiveBayesModel} import org.apache.spark.mllib.evaluation.{BinaryClassificationMetrics, MulticlassMetrics, MultilabelMetrics, binary} import org.apache.spark.sql.{SQLContext, SparkSession} import org.apache.log4j.Logger import org.apache.log4j.Level Initialize a SparkSession specifying configurations with the builder pattern, thus making an entry point available for the Spark cluster: val spark = SparkSession .builder .master("local[4]") .appName("myNaiveBayes08") .config("spark.sql.warehouse.dir", ".") .getOrCreate() val data = sc.textFile("../data/sparkml2/chapter6/iris.data.prepared.txt") Parse the data using map() and then build a LabeledPoint data structure. In this case, the last column is the Label and the first four columns are the features. 
Again, we replace the text in the last column (that is, the class of Iris) with numeric values (that is, 0, 1, 2) accordingly: val NaiveBayesDataSet = data.map { line => val columns = line.split(',') LabeledPoint(columns(4).toDouble , Vectors.dense(columns(0).toDouble,columns(1).toDouble,columns(2).to Double,columns(3).toDouble )) } Then make sure that the file does not contain any redundant rows. In this case, it has three redundant rows. We will use the distinct dataset going forward: println(" Total number of data vectors =", NaiveBayesDataSet.count()) val distinctNaiveBayesData = NaiveBayesDataSet.distinct() println("Distinct number of data vectors = ", distinctNaiveBayesData.count()) Output: (Total number of data vectors =,150) (Distinct number of data vectors = ,147) We inspect the data by examining the output: distinctNaiveBayesData.collect().take(10).foreach(println(_)) Output: (2.0,[6.3,2.9,5.6,1.8]) (2.0,[7.6,3.0,6.6,2.1]) (1.0,[4.9,2.4,3.3,1.0]) (0.0,[5.1,3.7,1.5,0.4]) (0.0,[5.5,3.5,1.3,0.2]) (0.0,[4.8,3.1,1.6,0.2]) (0.0,[5.0,3.6,1.4,0.2]) (2.0,[7.2,3.6,6.1,2.5]) .............. ................ ............. Split the data into training and test sets using a 30% and 70% ratio. The 13L in this case is simply a seeding number (L stands for long data type) to make sure the result does not change from run to run when using a randomSplit() method: val allDistinctData = distinctNaiveBayesData.randomSplit(Array(.30,.70),13L) val trainingDataSet = allDistinctData(0) val testingDataSet = allDistinctData(1) Print the count for each set: println("number of training data =",trainingDataSet.count()) println("number of test data =",testingDataSet.count()) Output: (number of training data =,44) (number of test data =,103) Build the model using train() and the training dataset: val myNaiveBayesModel = NaiveBayes.train(trainingDataSet Use the training dataset plus the map() and predict() methods to classify the flowers based on their features: val predictedClassification = testingDataSet.map( x => (myNaiveBayesModel.predict(x.features), x.label)) Examine the predictions via the output: predictedClassification.collect().foreach(println(_)) (2.0,2.0) (1.0,1.0) (0.0,0.0) (0.0,0.0) (0.0,0.0) (2.0,2.0) ....... ....... ....... Use MulticlassMetrics() to create metrics for the multi-class classifier. As a reminder, this is different from the previous recipe, in which we used BinaryClassificationMetrics(): val metrics = new MulticlassMetrics(predictedClassification) Use the commonly used confusion matrix to evaluate the model: val confusionMatrix = metrics.confusionMatrix println("Confusion Matrix= n",confusionMatrix) Output: (Confusion Matrix= ,35.0 0.0 0.0 0.0 34.0 0.0 0.0 14.0 20.0 ) We examine other properties to evaluate the model: val myModelStat=Seq(metrics.precision,metrics.fMeasure,metrics.recall) myModelStat.foreach(println(_)) Output: 0.8640776699029126 0.8640776699029126 0.8640776699029126 How it works... We used the IRIS dataset for this recipe, but we prepared the data ahead of time and then selected the distinct number of rows by using the NaiveBayesDataSet.distinct() API. We then proceeded to train the model using the NaiveBayes.train() API. In the last step, we predicted using .predict() and then evaluated the model performance via MulticlassMetrics() by outputting the confusion matrix, precision, and F-Measure metrics. The idea here was to classify the observations based on a selected feature set (that is, feature engineering) into classes that correspond to the left-hand label. 
The difference here was that we are applying joint probability given conditional probability to the classification. This concept is known as Bayes' theorem, which was originally proposed by Thomas Bayes in the 18th century. There is a strong assumption of independence that must hold true for the underlying features to make Bayes' classifier work properly. At a high level, the way we achieved this method of classification was to simply apply Bayes' rule to our dataset. As a refresher from basic statistics, Bayes' rule can be written as follows: The formula states that the probability of A given B is true is equal to probability of B given A is true times probability of A being true divided by probability of B being true. It is a complicated sentence, but if we step back and think about it, it will make sense. The Bayes' classifier is a simple yet powerful one that allows the user to take the entire probability feature space into consideration. To appreciate its simplicity, one must remember that probability and frequency are two sides of the same coin. The Bayes' classifier belongs to the incremental learner class in which it updates itself upon encountering a new sample. This allows the model to update itself on-the-fly as the new observation arrives rather than only operating in batch mode. We evaluated a model with different metrics. Since this is a multi-class classifier, we have to use MulticlassMetrics() to examine model accuracy. [box type="download" align="" class="" width=""]Download exercise and code files here. Exercise Files_Implementing Naive Bayes algorithm with Spark MLlib[/box] For more information on Multiclass Metrics, please see the following link: http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.mllib .evaluation.MulticlassMetrics Documentation for constructor can be found here: http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.ml.classification.NaiveBayes If you enjoyed this article, you should have a look at Apache Spark 2.0 Machine Learning Cookbook which contains this excerpt.
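As a small addition that is not part of the original recipe: a trained MLlib NaiveBayesModel can be persisted and reloaded with its save() and load() methods, which is useful if you want to reuse the classifier in another job without retraining. The output path below is a placeholder.

    // Sketch only - the path is a placeholder; spark is the SparkSession created earlier.
    myNaiveBayesModel.save(spark.sparkContext, "target/tmp/myNaiveBayesModel")

    // Later, or in a different job, reload the model and classify a single observation:
    val reloadedModel = NaiveBayesModel.load(spark.sparkContext, "target/tmp/myNaiveBayesModel")
    val sample = Vectors.dense(5.1, 3.5, 1.4, 0.2)   // sepal/petal measurements in cm
    println(reloadedModel.predict(sample))            // a measurement like this should map to class 0.0 (Iris Setosa)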

(13*3)+ Halloween costume ideas for Data science nerds

Packt Editorial Staff
31 Oct 2017
14 min read
Are you a data scientist, a machine learning engineer, an AI researcher or simply a data enthusiast? Channel the inner data science nerd within you with these geeky ideas for your Halloween costumes! The Data Science Spectrum Don't know what to go as to this evening's party because you've been busy cleaning that terrifying data? Don’t worry, here are some easy-to-put-together Halloween costume ideas just for you. [dropcap]1[/dropcap] Big Data Go as Baymax, the healthcare robot, (who can also turn into battle mode when required). Grab all white clothes that you have. Stuff your tummy with some pillows and wear a white mask with cutouts for eyes. You are all ready to save the world. In fact, convince a friend or your brother to go as Hiro! [dropcap]2[/dropcap] A.I. agent Enter as Agent Smith, the AI antagonist, this Halloween. Lure everyone with your bold black suit paired with a white shirt and a black tie. A pair of polarized sunglasses would replicate you as the AI agent. Capture the crowd by being the most intelligent and cold-hearted personality of all. [dropcap]3[/dropcap] Data Miner Put on your dungaree with a tee. Fix a flashlight atop your cap. Grab a pickaxe from the gardening toolkit, if you have one. Stripe some mud onto your face. Enter the party wheeling with loads of data boxes that you have freshly mined. You’ll definitely grab some traffic for data. Unstructured data anyone? [dropcap]4[/dropcap] Data Lake Go as a Data lake this Halloween. Simply grab any blue item from your closet. Draw some fishes, crabs, and weeds. (Use a child’s marker for that). After all, it represents the data you have. And you’re all set. [dropcap]5[/dropcap] Dark Data Unleash the darkness within your soul! Just kidding. You don’t actually have to turn to the evil side. Just coming up with your favorite black-costume character would do. Looking for inspiration? Maybe, a witch, The dark knight, or The Darth Vader. [dropcap]6[/dropcap] Cloud A fluffy, white cloud is what you need to be this Halloween. Raid your nearby drug store for loads of cotton balls. Better still, tear up that old pillow you have been meaning to throw away for a while. Use the fiber inside to glue onto an unused tee. You will be the cutest cloud ever seen. Don’t forget to carry an umbrella in case you turn grey! [dropcap]7[/dropcap] Predictive Analytics Make your own paper wizard hat with silver stars and moons pasted on it. If you can arrange for an advocate gown, it would be great. Else you could use a long black bed sheet as a cape. And most importantly, a crystal ball to show off some prediction stunts at the Halloween. [dropcap]8[/dropcap] Gradient boosting Enter Halloween as the energy booster. Wear what you want. Grab loads of empty energy drink tetra packs and stick it all over you. Place one on your head too. Wear a nameplate that says “ G-booster Energy drink”. Fuel up some weak models this Halloween. [dropcap]9[/dropcap] Cryptocurrency Wear head to toe black. In fact, paint your face black as well, like the Grim reaper. Then grab a cardboard piece. Cut out a circle, paint it orange, and then draw a gold B symbol, just like you see in a bitcoin. This Halloween costume will definitely grab you the much-needed attention just as this popular cryptocurrency. [dropcap]10[/dropcap] IoT Are you a fan of IoT and the massive popularity it has gained? Then you should definitely dress up as your web-slinging, friendly neighborhood Spiderman. Just grab a spiderman costume from any costume store and attach some handmade web slings. 
Remember to connect with people by displaying your IoT knowledge. [dropcap]11[/dropcap] Self-driving car Choose a mono-color outfit of your choice (P.S. The color you would choose for your car). Cut out four wheels and paste two on your lower calves and two on your arms. Cut out headlights too. Put on a wiper goggle. And yes you do not need a steering wheel or the brakes, clutch and the accelerator. Enter the Halloween at your own pace, go self-driving this Halloween. Bonus point: You can call yourself Bumblebee or Optimus Prime. Machine Learning and Deep learning Frameworks If machine learning or deep learning is your forte, here are some fresh Halloween costume ideas based on some of the popular frameworks in that space. [dropcap]12[/dropcap] Torch Flame up the party with a costume inspired by the fantastic four superhero, Johnny Storm a.k.a The Human Torch. Wear a yellow tee and orange slacks. Draw some orange flames on your tee. And finally, wear a flame-inspired headband. Someone is a hot machine learning library! [dropcap]13[/dropcap] TensorFlow No efforts for this one. Just arrange for a pumpkin costume, paste a paper cut-out of the TensorFlow logo and wear it as a crown. Go as the most powerful and widely popular deep learning library. You will be the star of the Halloween as you are a Google Kid. [dropcap]14[/dropcap] Caffe Go as your favorite Starbucks coffee this Halloween. Wear any of your brown dress/ tee. Draw or stick a Starbucks logo. And then add frothing to the top by bunching up a cream-colored sheet. Mamma Mia! [dropcap]15[/dropcap] Pandas Go as a Panda this Halloween! Better still go as a group of Pandas. The best option is to buy a panda costume. But if you don’t want that, wear a white tee, black slacks, black goggles and some cardboard cutouts for ears. This will make you not only the cutest animal in the party but also a top data manipulation library. Good luck finding your python in the party by the way. [dropcap]16[/dropcap] Jupyter Notebook Go as a top trending open-source web application by dressing up as the largest planet in our solar system. People would surely be intimidated by your mass and also by your computing power. [dropcap]17[/dropcap] H2O Go to Halloween as a world famous open source deep learning platform. No, no, you don’t have to go as the platform itself. Instead go as the chemical alter-ego, water. Wear all blue and then grab some leftover asymmetric, blue cloth pieces to stick at your sides. Thirsty anyone? Data Viz & Analytics Tools If you’re all about analytics and visualization, grab the attention of every data geek in your party by dressing up as your favorite data insight tools. [dropcap]18[/dropcap] Excel Grab an old white tee and paint some green horizontal stripes. You’re all ready to go as the most widely used spreadsheet. The simplest of costumes, yet the most useful - a timeless classic that never goes out of fashion. [dropcap]19[/dropcap] MatLab If you have seriously run out of all costume ideas, going out as MatLab is your only solution. Just grab a blue tablecloth. Stick or sew it with some orange curtain and throw it over your head. You’re all ready to go as the multi-paradigm numerical computing environment. [dropcap]20[/dropcap] Weka Wear a brown overall, a brown wig, and paint your face brown. Make an orange beak out of a chart paper, and wear a pair orange stockings/ socks with your trousers tucked in. You are all set to enter as a data mining bird with ML algorithms and Java under your wings. 
[dropcap]21[/dropcap] Shiny Go all Shimmery!! Get some glitter powder and put it all over you. (You’ll have a tough time removing it though). Else choose a glittery outfit, with glittery shoes, and touch-up with some glitter on your face. Let the party see the bling of R that you bring. You will be the attractive storyteller out there. [dropcap]22[/dropcap] Bokeh A colorful polka-dotted outfit and some dim lights to do the magic. You are all ready to grab the show with such a dazzle. Make sure you enter the party gates with Python. An eye-catching beauty with the beast pair. [dropcap]23[/dropcap] Tableau Enter the Halloween as one of your favorite characters from history. But there is a term and condition for this: You cannot talk or move. Enjoy your Halloween by being still. Weird, but you’ll definitely grab everyone’s eye. [dropcap]24[/dropcap] Microsoft Power BI Power up your Halloween party by entering as a data insights superhero. Wear a yellow turtleneck, a stylish black leather jacket, black pants, some mid-thigh high boots and a slick attitude. You’re ready to save your party! Data Science oriented Programming languages These hand-picked Halloween costume ideas are for you if you consider yourself a top coder. By a top coder we mean you’re all about learning new programming languages in your spare and, well, your not so spare time.   [dropcap]25[/dropcap] Python Easy peasy as the language looks, the reptile is not that easy to handle. A pair of python-printed shirt and trousers would do the job. You could be getting more people giving you candies some out of fear, other out of the ease. Definitely, go as a top trending and a go-to language which everyone loves! And yes, don’t forget the fangs. [dropcap]26[/dropcap] R Grab an eye patch and your favorite leather pants. Wear a loose white shirt with some rugged waistcoat and a sword. Here you are all decked up as a pirate for your next loot. You’ll surely thank me for giving you a brilliant Halloween idea. But yes! Don’t forget to make that Arrrr (R) noise! [dropcap]27[/dropcap] Java Go as a freshly roasted coffee bean! People in your Halloween party would be allured by your aroma. They would definitely compliment your unique idea and also the fact that you’re the most popular programming language. [dropcap]28[/dropcap] SAS March in your Halloween party up as a Special Airforce Service (SAS) agent. You would be disciplined, accurate, precise and smart. Just like the advanced software suite that goes by the same name. You would need a full black military costume, with a gas mask, some fake ammunition from a nearby toy store, and some attitude of course! [dropcap]29[/dropcap] SQL If you pride yourself on being very organized or are a stickler for the rules, you should go as SQL this Halloween. Prep-up yourself with an overall blue outfit. Spike up your hair and spray some temporary green hair color. Cut out bold letters S, Q, and L from a plain white paper and stick them on your chest. You are now ready to enter the Halloween party as the most popular database of all times. Sink in all the data that you collect this Halloween. [dropcap]30[/dropcap] Scala If Scala is your favorite programming language, add a spring to your Halloween by going as, well, a spring! Wear the brightest red that you have. Using a marker, draw some swirls around your body (You can ask your mom to help). Just remember to elucidate a 3D picture. And you’re all set. 
[dropcap]31[/dropcap] Julia If you want to make a red carpet entrance to your Halloween party, go as the Academy award-winning actress, Julia Roberts. You can even take up inspiration from her character in the 90s hit film Pretty Woman. For extra oomph, wear a pink, red, and purple necklace to highlight the Julia programming language [dropcap]32[/dropcap] Ruby Act pricey this Halloween. Be the elegant, dynamic yet simple programming language. Go blood red, wear on your brightest red lipstick, red pumps, dazzle up with all the red accessories that you have. You’ll definitely gather some secret admirers around the hall. [dropcap]33[/dropcap] Go Go as the mascot of Go, the top trending programming language. All you need is a blue mouse costume. Fear not if you don’t have one. Just wear a powder blue jumpsuit, grab a baby pink nose, and clip on a fake single, large front tooth. Ready for the party! [dropcap]34[/dropcap] Octave Go as a numerically competent programming language. And if that doesn’t sound very trendy, go as piano keys depicting an octave. You simply need to wear all white and divide your space into 8 sections. Then draw 5 horizontal black stripes. You won’t be able to do that vertically, well, because they are a big number. Here you go, you’re all set to fill the party with your melody. Fancy an AI system inspired Halloween costume? This is for you if you love the way AI works and the enigma that it has thrown around the world. This is for you if you are spellbound with AI magic. You should go dressed as one of these at your Halloween party this season. Just pick up the AI you want to look like and follow as advised. [dropcap]35[/dropcap] IBM Watson Wear a dark blue hat, a matching long overcoat, a vest and a pale blue shirt with a dark tie tucked into the vest. Complement it with a mustache and a brooding look. You are now ready to be IBM Watson at your Halloween party. [dropcap]36[/dropcap] Apple Siri If you want to be all cool and sophisticated like the Apple’s Siri, wear an alluring black turtleneck dress. Don’t forget to carry your latest iPhone and air pods. Be sure you don’t have a sore throat, in case someone needs your assistance. [dropcap]37[/dropcap] Microsoft Cortana If Microsoft Cortana is your choice of voice assistant, dress up as Cortana, the fictional synthetic intelligence character in the Halo video game series. Wear a blue bodysuit. Get a bob if you’re daring. (A wig would also do). Paint some dark blue robot like designs over your body and well, your face. And you’re all set. [dropcap]38[/dropcap] Salesforce Einstein Dress up as the world’s most famous physicist and also an AI-powered CRM. How? Just grab a white shirt, a blue pullover and a blue tie (Salesforce colors). Finish your look with a brown tweed coat, brown pants and shoes, a rugged white wig and mustache, and a deep thought on your face. [dropcap]39[/dropcap] Facebook Jarvis Get inspired by the Iron man’s Jarvis, the coolest A.I. in the Marvel universe. Just grab a plexiglass, draw some holograms and technological symbols over it with a neon marker. (Try to keep the color palette in shades of blues and reds). And fix this plexiglass in a curved fashion in front of your face by a headband. Do practice saying “Hello Mr. Stark.”  [dropcap]40[/dropcap] Amazon Echo This is also an easy one. Grab a long, black chart paper. Roll it around in a tube form around your body. Draw the Amazon symbol at the bottom with some glittery, silver sketch pen, color your hair blue, and there you go. 
If you have a girlfriend, convince her to go as Amazon Alexa. [dropcap]41[/dropcap] SAP Leonardo Put on a hat, wear a long cloak, some fake overgrown mustache, and beard. Accessorize with a color palette and a paintbrush. You will be the Leonardo da Vinci of the Halloween party. Wait a minute, don’t forget to cut out SAP initials and stick them on your cap. After all, you are entering as SAP’s very own digital revolution system. [dropcap]42[/dropcap] Intel Neon Deck the Halloween hall with a Harley Quinn costume. For some extra dramatization, roll up some neon blue lights around your head. Create an Intel logo out of some blue neon lights and wear it as your neckpiece. [dropcap]43[/dropcap] Microsoft Brainwave This one will require a DIY task. Arrange for a red and green t-shirt, cut them into a vertical half. Stitch it in such a way that the green is on the left and the red on the right. Similarly, do that with your blue and yellow pants; with yellow on the left and blue on the right. You will look like the most powerful Microsoft’s logo. Wear a skullcap with wires protruding out and a Hololens like eyewear to go with. And so, you are all ready to enter the Halloween party as Microsoft’s deep learning acceleration platform for real-time AI. [dropcap]44[/dropcap] Sophia, the humanoid Enter with all the confidence and a top-to-toe professional attire. Be ready to answer any question thrown at you with grace and without a stroke of skepticism. And to top it off, sport a clean shaved head. And there, you are all ready to blow off everyone’s mind with a mix of beauty with super intelligent brains.   Happy Halloween folks!
Is Golang truly community driven and does it really matter?

Sugandha Lahoti
24 May 2019
6 min read
Golang, also called Go, is a statically typed, compiled programming language designed by Google. Golang is going from strength to strength, as more engineers than ever are using it at work, according to Go User Survey 2019. An opinion that has led to the Hacker News community into a heated debate last week: “Go is Google's language, not the community's”. The thread was first started by Chris Siebenmann who works at the Department of Computer Science, University of Toronto. His blog post reads, “Go has community contributions but it is not a community project. It is Google's project.” Chris explicitly states that the community's voice doesn't matter very much for Go's development, and we have to live with that. He argues that Google is the gatekeeper for community contributions; it alone decides what is and isn't accepted into Go. If a developer wants some significant feature to be accepted into Golang, working to build consensus in the community is far less important than persuading the Golang core team. He then cites the example of how one member of Google's Go core team discarded the entire Go Modules system that the Go community had been working on and brought in a relatively radically different model. Chris believes that the Golang team cares about the community and want them to be involved, but only up to a certain point. He wants the Go core team to be bluntly honest about the situation, rather than pretend and implicitly lead people on. He further adds, “Only if Go core team members start leaving Google and try to remain active in determining Go's direction, can we [be] certain Golang is a community-driven language.” He then compares Go with C++, calling the latter a genuine community-driven language. He says there are several major implementations in C++ which are genuine community projects, and the direction of C++ is set by an open standards committee with a relatively distributed membership. https://twitter.com/thatcks/status/1131319904039309312 What is better - community-driven or corporate ownership? There has been an opinion floating around developers about how some open source programming projects are just commercial projects driven mainly by a single company.  If we look at the top open source projects, most of them have some kind of corporate backing ( Apple’s Swift, Oracle’s Java, MySQL, Microsoft’s Typescript, Google’s Kotlin, Golang, Android, MongoDB, Elasticsearch) to name a few. Which brings us to the question, what does corporate ownership of open source projects really mean? A benevolent dictatorship can have two outcomes. If the community for a particular project suggests a change, and in case a change is a bad idea, the corporate team can intervene and stop changes. On the other hand, though, it can actually stop good ideas from the community in being implemented, even if a handful of members from the core team disagree. Chris’s post has received a lot of attention by developers on Hacker News who both sided with and disagreed with the opinion put forward. A comment reads, “It's important to have a community and to work with it, but, especially for a programming language, there has to be a clear concept of which features should be implemented and which not - just accepting community contributions for the sake of making the community feel good would be the wrong way.” Another comment reads, “Many like Go because it is an opinionated language. I'm not sure that a 'community' run language will create something like that because there are too many opinions. 
Many claims to represent the community, but not the community that doesn't share their opinion. Without clear leaders, I fear technical direction and taste will be about politics which seems more uncertain/risky. I like that there is a tight cohesive group in control over Go and that they are largely the original designers. I might be more interested in alternative government structures and Google having too much control only if those original authors all stepped down.” Rather than splitting between Community or Corporate, a more accurate representation would be how much market value is depending on those projects. If a project is thriving, usually enterprises will take good decisions to handle it. However, another but entirely valid and important question to ask is ‘should open source projects be driven by their market value?’ Another common argument is that the core team’s full-time job is to take care of the language instead of taking errant decisions based on community backlash. Google (or Microsoft, or Apple, or Facebook for that matter) will not make or block a change in a way that kills an entire project. But this does not mean they should sit idly, ignoring the community response. Ideally, the more that a project genuinely belongs to its community, the more it will reflect what the community wants and needs. Google also has a propensity to kill its own products. What happens when Google is not as interested in Golang anymore? The company could leave it to the community to figure out the governance model suddenly by pulling off the original authors into some other exciting new project. Or they may let the authors only work on Golang in their spare time at home or at the weekends. While Google's history shows that many of their dead products are actually an important step towards something better and more successful, why and how much of that logic would be directly relevant to an open source project is something worth thinking about. As a Hacker news user wrote, “Go is developed by Bell Labs people, the same people who bought us C, Unix and Plan 9 (Ken, Pike, RSC, et al). They took the time to think through all their decisions, the impacts of said decisions, along with keeping things as simple as possible. Basically, doing things right the first time and not bolting on features simply because the community wants them.” Another says, “The way how Golang team handles potentially tectonic changes in language is also exemplary – very well communicated ideas, means to provide feedback and clear explanation of how the process works.” Rest assured, if any major change is made to Go, even a drastic one such as killing it, it will not be done without consulting the community and taking their feedback. Go User Survey 2018 results: Golang goes from strength to strength, as more engineers than ever are using it at work. GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch State of Go February 2019 – Golang developments report for this month released
Firefox 70 released with better security, CSS, and JavaScript improvements

Savia Lobo
23 Oct 2019
6 min read
The Mozilla team announced the much-awaited release of Firefox 70 yesterday, with new features like secure password generation with Lockwise and the new Firefox Privacy Protection Report. Firefox 70 also includes a plethora of additions for developers, such as DOM mutation breakpoints and inactive CSS rule indicators in the DevTools, several new CSS text properties, two-value display syntax, JS numeric separators, and much more.

Firefox 70 centers around enhanced privacy and security

Firefox 70 includes Enhanced Tracking Protection (ETP), which comes with a Firefox Privacy Protection Report that gives additional details and more visibility into how you're being tracked online so you can better combat it. Enhanced Tracking Protection was enabled by default in September this year. The report highlights how ETP prevents third-party trackers from building a user's profile based on their online activity. The report also includes the number of cross-site and social media trackers, fingerprinters, and cryptominers Mozilla blocked.

The report also helps users stay updated with Firefox Monitor and Firefox Lockwise. Firefox Monitor gives users a summary of the number of unsafe passwords that may have been used in a breach, so that they can take action to update and change those passwords. Firefox Lockwise helps users manage passwords across synced devices. It includes a button users can click to view their logins and updates, and it lets them quickly view and manage how many devices they are syncing and sharing passwords with. To know more about security in Firefox 70, read Mozilla's blog.

What's new in Firefox 70

Updated HTML forms and secure passwords

To generate secure passwords, the team has updated HTML input elements. Here, any input element of type password will have an option to generate a secure password available in the context menu, which can then be stored in Lockwise. In addition, any type="password" field with autocomplete="new-password" set on it will have an autocomplete UI to generate a new password in-context.

New CSS improvements

Firefox 70 includes some CSS improvements, like new options for styling underlines and a new set of two-keyword display values.

Options for styling underlines include three new properties for text-decoration (underline):
text-decoration-thickness: sets the thickness of lines added via text-decoration.
text-underline-offset: sets the distance between a text-decoration and the text it is set on. Bear in mind that this only works on underlines.
text-decoration-skip-ink: sets whether underlines and overlines are drawn if they cross descenders and ascenders. The default value, auto, causes them to only be drawn where they do not cross over a glyph. To allow underlines to cross glyphs, set the value to none.

Two-keyword display values

Until now, the display property has taken a single value. However, the team says that "the boxes on a page have an outer display type, which determines how the box is laid out in relation to other boxes on the page, and an inner display type, which determines how the box's children will behave." The two-keyword values allow you to explicitly specify the outer and inner display values.
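The following short sketch pulls together the new underline properties and the two-value display syntax described above. The selector names and values are purely illustrative and are not taken from the Mozilla announcement:

/* Illustrative only: the new underline styling properties in Firefox 70 */
a.decorated-link {
  text-decoration: underline;
  text-decoration-thickness: 2px;   /* thickness of the underline */
  text-underline-offset: 4px;       /* gap between the text and its underline */
  text-decoration-skip-ink: none;   /* draw the line straight through descenders */
}

/* Illustrative only: two-value display syntax (outer type, then inner type) */
.sidebar {
  display: inline flex;             /* behaves as inline outside, as a flex container inside */
}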
In supporting browsers (which currently include only Firefox), the single keyword values will map to new two-keyword values, for example:
display: flex; is equivalent to display: block flex;
display: inline-flex; is equivalent to display: inline flex;

JavaScript improvements

Firefox 70 now supports numeric separators for JavaScript. Underscores can now be used as separators in large numbers so that they are more readable. Other improvements in JavaScript include:

Intl improvements

Firefox 70 includes improved JavaScript i18n (internationalization), starting with the implementation of the Intl.RelativeTimeFormat.formatToParts() method. This is a special version of Intl.RelativeTimeFormat.format() that returns an array of objects, each one representing a part of the value, rather than returning a string of the localized time value. Also, Intl.NumberFormat.format() and Intl.NumberFormat.formatToParts() now accept BigInt values.

Performance Improvements

The inclusion of the new baseline interpreter has sped up JavaScript. The code for the new interpreter includes shared code from the existing Baseline JIT. You can read more about the interpreter in The Baseline Interpreter: a faster JS interpreter in Firefox 70.

New Developer tools

The Developer Tools Accessibility panel now includes an audit for keyboard accessibility and a color deficiency simulator for systems with WebRender enabled.

Pause option on DOM mutation in the Debugger

DOM Mutation Breakpoints (aka DOM Change Breakpoints) let you pause scripts that add, remove, or change specific elements. Once a DOM mutation breakpoint is set, you'll see it listed under "DOM Mutation Breakpoints" in the right-hand pane of the Debugger; this is also where you'll see breaks reported.

Color contrast information in the color picker

In the CSS Rules view, you can click foreground colors with the color picker to determine if their contrast with the background color meets accessibility guidelines.

Accessibility inspector: keyboard checks

The Accessibility inspector's Check for issues dropdown now includes keyboard accessibility checks. Selecting this option causes Firefox to go through each node in the accessibility tree and highlight all nodes that have a keyboard accessibility issue. Hovering over or clicking each one will reveal information about what the issue is, along with a "Learn more" link for more details on how to fix it.

Web socket inspector

In Firefox DevEdition, the Network monitor now has a new "Messages" panel, which appears when you are monitoring a web socket connection (i.e. a 101 response). This can be used to inspect web socket frames sent and received through the connection. This functionality was originally supposed to be in the Firefox 70 general release, but the team had a few more bugs to resolve, so expect it in Firefox 71! For now, users can explore it in the DevEdition.

Fixed issues in Firefox 70

Built-in Firefox pages now follow the system dark mode preference
Aliased theme properties have been removed, which may affect some themes
Passwords can now be imported from Chrome on macOS in addition to existing support for Windows
Readability is now greatly improved on under- or overlined texts, including links. The lines will now be interrupted instead of crossing over a glyph.
Improved privacy and security indicators

A new crossed-out lock icon will indicate sites delivered via insecure HTTP
The formerly green lock icon is now grey
The Extended Validation (EV) indicator has been moved to the identity popup that appears when clicking the lock icon

To know more about the other improvements and bug fixes in Firefox 70, read Mozilla's official blog.

Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70
Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
Mozilla Thunderbird 78 will include OpenPGP support, expected to be released by Summer 2020
The BSP Layer

Packt
02 Apr 2015
14 min read
In this article by Alex González, author of the book Embedded Linux Projects Using Yocto Project Cookbook, we will see how embedded Linux projects require both custom hardware and software. An early task in the development process is to test different hardware reference boards and select one to base our design on. We have chosen the Wandboard, a Freescale i.MX6-based platform, as it is an affordable and open board, which makes it perfect for our needs. On an embedded project, it is usually a good idea to start working on the software as soon as possible, probably before the hardware prototypes are ready, so that it is possible to start working directly with the reference design. But at some point, the hardware prototypes will be ready and changes will need to be introduced into Yocto to support the new hardware. This article will explain how to create a BSP layer to contain those hardware-specific changes, as well as show how to work with the U-Boot bootloader and the Linux kernel, the components which are likely to take most of the customization work. (For more resources related to this topic, see here.)

Creating a custom BSP layer

These custom changes are kept on a separate Yocto layer, called a Board Support Package (BSP) layer. This separation is best for future updates and patches to the system. A BSP layer can support any number of new machines and any new software feature that is linked to the hardware itself.

How to do it...

By convention, Yocto layer names start with meta, short for metadata. A BSP layer may then add a bsp keyword, and finally a unique name. We will call our layer meta-bsp-custom. There are several ways to create a new layer:

Manually, once you know what is required
By copying the meta-skeleton layer included in Poky
By using the yocto-layer command-line tool

You can have a look at the meta-skeleton layer in Poky and see that it includes the following elements:

A layer.conf file, where the layer configuration variables are set
A COPYING.MIT license file
Several directories named with the recipes prefix, with example recipes for BusyBox, the Linux kernel and an example module, an example service recipe, an example user management recipe, and a multilib example

How it works...

We will cover some of the use cases that appear in the available examples, so for our needs, we will use the yocto-layer tool, which allows us to create a minimal layer. Open a new terminal and change to the fsl-community-bsp directory. Then set up the environment as follows:

$ source setup-environment wandboard-quad

Note that once the build directory has been created, the MACHINE variable has already been configured in the conf/local.conf file and can be omitted from the command line. Change to the sources directory and run:

$ yocto-layer create bsp-custom

Note that the yocto-layer tool will add the meta prefix to your layer, so you don't need to. It will prompt a few questions:

The layer priority, which is used to decide the layer precedence in cases where the same recipe (with the same name) exists in several layers simultaneously. It is also used to decide in what order bbappends are applied if several layers append the same recipe. Leave the default value of 6. This will be stored in the layer's conf/layer.conf file as BBFILE_PRIORITY.
Whether to create example recipes and append files. Let's leave the default no for the time being.

Our new layer has the following structure:

meta-bsp-custom/
   conf/layer.conf
   COPYING.MIT
   README

There's more...
The first thing to do is to add this new layer to your project's conf/bblayers.conf file. It is a good idea to add it to your template conf directory's bblayers.conf.sample file too, so that it is correctly appended when creating new projects. The highlighted line in the following code shows the addition of the layer to the conf/bblayers.conf file:

LCONF_VERSION = "6"

BBPATH = "${TOPDIR}"
BSPDIR := "${@os.path.abspath(os.path.dirname(d.getVar('FILE', True)) + '/../..')}"

BBFILES ?= ""
BBLAYERS = " \
${BSPDIR}/sources/poky/meta \
${BSPDIR}/sources/poky/meta-yocto \
${BSPDIR}/sources/meta-openembedded/meta-oe \
${BSPDIR}/sources/meta-openembedded/meta-multimedia \
${BSPDIR}/sources/meta-fsl-arm \
${BSPDIR}/sources/meta-fsl-arm-extra \
${BSPDIR}/sources/meta-fsl-demos \
${BSPDIR}/sources/meta-bsp-custom \
"

Now, BitBake will parse the bblayers.conf file and find the conf/layer.conf file from your layer. In it, we find the following line:

BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
        ${LAYERDIR}/recipes-*/*/*.bbappend"

It tells BitBake which directories to parse for recipes and append files. You need to make sure your directory and file hierarchy in this new layer matches the given pattern, or you will need to modify it. BitBake will also find the following:

BBPATH .= ":${LAYERDIR}"

The BBPATH variable is used to locate the bbclass files and the configuration and files included with the include and require directives. The search finishes with the first match, so it is best to keep filenames unique. Some other variables we might consider defining in our conf/layer.conf file are:

LAYERDEPENDS_bsp-custom = "fsl-arm"
LAYERVERSION_bsp-custom = "1"

The LAYERDEPENDS variable is a space-separated list of other layers your layer depends on, and the LAYERVERSION variable specifies the version of your layer in case other layers want to add a dependency to a specific version.

The COPYING.MIT file specifies the license for the metadata contained in the layer. The Yocto project is licensed under the MIT license, which is also compatible with the General Public License (GPL). This license applies only to the metadata, as every package included in your build will have its own license.

The README file will need to be modified for your specific layer. It is usual to describe the layer and provide any other layer dependencies and usage instructions.

Adding a new machine

When customizing your BSP, it is usually a good idea to introduce a new machine for your hardware. These are kept under the conf/machine directory in your BSP layer. The usual thing to do is to base it on the reference design. For example, wandboard-quad has the following machine configuration file:

include include/wandboard.inc
SOC_FAMILY = "mx6:mx6q:wandboard"
UBOOT_MACHINE = "wandboard_quad_config"
KERNEL_DEVICETREE = "imx6q-wandboard.dtb"
MACHINE_FEATURES += "bluetooth wifi"
MACHINE_EXTRA_RRECOMMENDS += " bcm4329-nvram-config bcm4330-nvram-config "

A machine based on the Wandboard design could define its own machine configuration file, wandboard-quad-custom.conf, as follows:

include conf/machine/include/wandboard.inc
SOC_FAMILY = "mx6:mx6q:wandboard"
UBOOT_MACHINE = "wandboard_quad_custom_config"
KERNEL_DEVICETREE = "imx6q-wandboard-custom.dtb"
MACHINE_FEATURES += "wifi"

The wandboard.inc file now resides on a different layer, so in order for BitBake to find it, we need to specify the full path from the BBPATH variable in the corresponding layer.
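With the layer registered in bblayers.conf and the machine file in place, one way to build for the new machine (a minimal sketch, assuming the machine file above is named wandboard-quad-custom.conf) is to select it in the build directory's conf/local.conf and then build an image:

# conf/local.conf (build directory): select the custom machine; illustrative only
MACHINE ?= "wandboard-quad-custom"

# From a shell with the environment set up, build an image, for example:
$ bitbake core-image-minimal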
This machine defines its own U-Boot configuration file and Linux kernel device tree, in addition to defining its own set of machine features.

Adding a custom device tree to the Linux kernel

To add this device tree file to the Linux kernel, we need to add the device tree file to the arch/arm/boot/dts directory under the Linux kernel source and also modify the Linux build system's arch/arm/boot/dts/Makefile file to build it, as follows:

+dtb-$(CONFIG_ARCH_MXC) += imx6q-wandboard-custom.dtb

This code uses diff formatting, where the lines with a minus prefix are removed, the ones with a plus sign are added, and the ones without a prefix are left as reference. Once the patch is prepared, it can be added to the meta-bsp-custom/recipes-kernel/linux/linux-wandboard-3.10.17/ directory, and the Linux kernel recipe appended by adding a meta-bsp-custom/recipes-kernel/linux/linux-wandboard_3.10.17.bbappend file with the following content:

SRC_URI_append = " file://0001-ARM-dts-Add-wandboard-custom-dts-file.patch"

Adding a custom U-Boot machine

In the same way, the U-Boot source may be patched to add a new custom machine. Bootloader modifications are not as likely to be needed as kernel modifications though, and most custom platforms will leave the bootloader unchanged. The patch would be added to the meta-bsp-custom/recipes-bsp/u-boot/u-boot-fslc-v2014.10/ directory and the U-Boot recipe appended with a meta-bsp-custom/recipes-bsp/u-boot/u-boot-fslc_2014.10.bbappend file with the following content:

SRC_URI_append = " file://0001-boards-Add-wandboard-custom.patch"

Adding a custom formfactor file

Custom platforms can also define their own formfactor file with information that the build system cannot obtain from other sources, such as defining whether a touchscreen is available or defining the screen orientation. These are defined in the recipes-bsp/formfactor/ directory in our meta-bsp-custom layer. For our new machine, we could define a meta-bsp-custom/recipes-bsp/formfactor/formfactor_0.0.bbappend file to include a formfactor file as follows:

FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"

And the machine-specific meta-bsp-custom/recipes-bsp/formfactor/formfactor/wandboard-quad-custom/machconfig file would be as follows:

HAVE_TOUCHSCREEN=1

Debugging the Linux kernel booting process

We have seen the most general techniques for debugging the Linux kernel. However, some special scenarios require the use of different methods. One of the most common scenarios in embedded Linux development is the debugging of the booting process. This section will explain some of the techniques used to debug the kernel's booting process.

How to do it...

A kernel crashing on boot usually provides no output whatsoever on the console. As daunting as that may seem, there are techniques we can use to extract debug information. Early crashes usually happen before the serial console has been initialized, so even if there were log messages, we would not see them. The first thing we will show is how to enable early log messages that do not need the serial driver. In case that is not enough, we will also show techniques to access the log buffer in memory.

How it works...

Debugging booting problems has two distinct phases: before and after the serial console is initialized. After the serial is initialized and we can see serial output from the kernel, debugging can use the techniques described earlier.
Before the serial is initialized, however, there is basic UART support in ARM kernels that allows you to use the serial from early boot. This support is compiled in with the CONFIG_DEBUG_LL configuration variable. This adds support for a debug-only series of assembly functions that allow you to output data to a UART. The low-level support is platform specific, and for the i.MX6, it can be found under arch/arm/include/debug/imx.S. The code allows for this low-level UART to be configured through the CONFIG_DEBUG_IMX_UART_PORT configuration variable. We can use this support directly by using the printascii function as follows:

extern void printascii(const char *);
printascii("Literal string\n");

However, it is much preferable to use the early_print function, which makes use of the function explained previously and accepts formatted input in printf style; for example:

early_print("%08x\t%s\n", p->nr, p->name);

Dumping the kernel's printk buffer from the bootloader

Another useful technique to debug Linux kernel crashes at boot is to analyze the kernel log after the crash. This is only possible if the RAM memory is persistent across reboots and does not get initialized by the bootloader. As U-Boot keeps the memory intact, we can use this method to peek at the kernel log in memory in search of clues. Looking at the kernel source, we can see how the log ring buffer is set up in kernel/printk/printk.c and also note that it is stored in __log_buf. To find the location of the kernel buffer, we will use the System.map file created by the Linux build process, which maps symbols to virtual addresses, using the following command:

$ grep __log_buf System.map
80f450c0 b __log_buf

To convert the virtual address to a physical address, we look at how __virt_to_phys() is defined for ARM:

x - PAGE_OFFSET + PHYS_OFFSET

The PAGE_OFFSET variable is defined in the kernel configuration as:

config PAGE_OFFSET
       hex
       default 0x40000000 if VMSPLIT_1G
       default 0x80000000 if VMSPLIT_2G
       default 0xC0000000

Some of the ARM platforms, like the i.MX6, will dynamically patch the __virt_to_phys() translation at runtime, so PHYS_OFFSET will depend on where the kernel is loaded into memory. As this can vary, the calculation we just saw is platform specific. For the Wandboard, with PAGE_OFFSET at 0x80000000 and the physical start of RAM (PHYS_OFFSET) at 0x10000000, the physical address for 0x80f450c0 is 0x80f450c0 - 0x80000000 + 0x10000000 = 0x10f450c0. We can then force a reboot using a magic SysRq key, which needs to be enabled in the kernel configuration with CONFIG_MAGIC_SYSRQ, but is enabled in the Wandboard by default:

$ echo b > /proc/sysrq-trigger

We then dump that memory address from U-Boot as follows:

> md.l 0x10f450c0
10f450c0: 00000000 00000000 00210038 c6000000   ........8.!.....
10f450d0: 746f6f42 20676e69 756e694c 6e6f2078   Booting Linux on
10f450e0: 79687020 61636973 5043206c 78302055     physical CPU 0x
10f450f0: 00000030 00000000 00000000 00000000   0...............
10f45100: 009600a8 a6000000 756e694c 65762078   ........Linux ve
10f45110: 6f697372 2e33206e 312e3031 2e312d37   rsion 3.10.17-1.
10f45120: 2d322e30 646e6177 72616f62 62672b64   0.2-wandboard+gb
10f45130: 36643865 62323738 20626535 656c6128   e8d6872b5eb (ale
10f45140: 6f6c4078 696c2d67 2d78756e 612d7068   x@log-linux-hp-a
10f45150: 7a6e6f67 20296c61 63636728 72657620   gonzal) (gcc ver
10f45160: 6e6f6973 392e3420 2820312e 29434347   sion 4.9.1 (GCC)
10f45170: 23202920 4d532031 52502050 504d4545     ) #1 SMP PREEMP
10f45180: 75532054 6546206e 35312062 3a323120   T Sun Feb 15 12:
10f45190: 333a3733 45432037 30322054 00003531   37:37 CET 2015..
10f451a0: 00000000 00000000 00400050 82000000   ........P.@.....
10f451b0: 3a555043 4d524120 50203776 65636f72   CPU: ARMv7 Proce

There's more...

Another method is to store the kernel log messages and kernel panic or oops messages in persistent storage. The Linux kernel's persistent store support (CONFIG_PSTORE) allows you to log to persistent memory that is kept across reboots. To log panic and oops messages into persistent memory, we need to configure the kernel with the CONFIG_PSTORE_RAM configuration variable, and to log kernel messages, we need to configure the kernel with CONFIG_PSTORE_CONSOLE. We then need to configure the location of the persistent storage on an unused memory location, but keep the last 1 MB of memory free. For example, we could pass the following kernel command-line arguments to reserve a 2 MB region starting at 0x30000000:

ramoops.mem_address=0x30000000 ramoops.mem_size=0x200000

We would then mount the persistent storage by adding it to /etc/fstab so that it is available on the next boot as well:

/etc/fstab:
pstore /pstore pstore defaults 0 0

We then mount it as follows:

# mkdir /pstore
# mount /pstore

Next, we force a reboot with the magic SysRq key:

# echo b > /proc/sysrq-trigger

On reboot, we will see a file inside /pstore:

-r--r--r-- 1 root root 4084 Sep 16 16:24 console-ramoops

This will have contents such as the following:

SysRq : Resetting
CPU3: stopping
CPU: 3 PID: 0 Comm: swapper/3 Not tainted 3.14.0-rc4-1.0.0-wandboard-37774-g1eae
[<80014a30>] (unwind_backtrace) from [<800116cc>] (show_stack+0x10/0x14)
[<800116cc>] (show_stack) from [<806091f4>] (dump_stack+0x7c/0xbc)
[<806091f4>] (dump_stack) from [<80013990>] (handle_IPI+0x144/0x158)
[<80013990>] (handle_IPI) from [<800085c4>] (gic_handle_irq+0x58/0x5c)
[<800085c4>] (gic_handle_irq) from [<80012200>] (__irq_svc+0x40/0x70)
Exception stack(0xee4c1f50 to 0xee4c1f98)

We should move it out of /pstore or remove it completely so that it doesn't occupy memory.

Summary

This article guides you through the customization of the BSP for your own product. It then explains how to debug the Linux kernel booting process.

Resources for Article:

Further resources on this subject:
Baking Bits with Yocto Project [article]
An Introduction to the Terminal [article]
Linux Shell Scripting – various recipes to help you [article]
Building a Tic-tac-toe game in ASP.Net Core 2.0 [Tutorial]

Aaron Lazar
13 Aug 2018
28 min read
Learning is more fun if we do it while making games. With this thought, let's continue our quest to learn .NET Core 2.0 by writing a Tic-tac-toe game in .NET Core 2.0. We will develop the game in the ASP.NET Core 2.0 web app, using SignalR Core. We will follow a step-by-step approach and use Visual Studio 2017 as the primary IDE, but will list the steps needed while using the Visual Studio Code editor as well. Let's do the project setup first and then we will dive into the coding. This tutorial has been extracted from the book .NET Core 2.0 By Example, by Rishabh Verma and Neha Shrivastava. Installing SignalR Core NuGet package Create a new ASP.NET Core 2.0 MVC app named TicTacToeGame. With this, we will have a basic working ASP.NET Core 2.0 MVC app in place. However, to leverage SignalR Core in our app, we need to install SignalR Core NuGet and the client packages. To install the SignalR Core NuGet package, we can perform one of the following two approaches in the Visual Studio IDE: In the context menu of the TicTacToeGame project, click on Manage NuGet Packages. It will open the NuGet Package Manager for the project. In the Browse section, search for the Microsoft.AspNetCore.SignalR package and click Install. This will install SignalR Core in the app. Please note that currently the package is in the preview stage and hence the pre-release checkbox has to be ticked: Edit the TicTacToeGame.csproj file, add the following code snippet in the ItemGroup code containing package references, and click Save. As soon as the file is saved, the tooling will take care of restoring the packages and in a while, the SignalR package will be installed. This approach can be used with Visual Studio Code as well. Although Visual Studio Code detects the unresolved dependencies and may prompt you to restore the package, it is recommended that immediately after editing and saving the file, you run the dotnet restore command in the terminal window at the location of the project: <ItemGroup> <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" /> <PackageReference Include="Microsoft.AspNetCore.SignalR" Version="1.0.0-alpha1-final" /> </ItemGroup> Now we have server-side packages installed. We still need to install the client-side package of SignalR, which is available through npm. To do so, we need to first ascertain whether we have npm installed on the machine or not. If not, we need to install it. npm is distributed with Node.js, so we need to download and install Node.js from https://nodejs.org/en/. The installation is quite straightforward. Once this installation is done, open a Command Prompt at the project location and run the following command: npm install @aspnet/signalr-client This will install the SignalR client package. Just go to the package location (npm creates a node_modules folder in the project directory). The relative path from the project directory would be \node_modules\@aspnet\signalr-client\dist\browser. From this location, copy the signalr-client-1.0.0-alpha1-final.js file into the wwwroot\js folder. In the current version, the name is signalr-client-1.0.0-alpha1-final.js. With this, we are done with the project setup and we are ready to use SignalR goodness as well. So let's dive into the coding. Coding the game In this section, we will implement our gaming solution. The end output will be the working two-player Tic-Tac-Toe game. 
We will do the coding in steps for ease of understanding:  In the Startup class, we modify the ConfigureServices method to add SignalR to the container, by writing the following code: //// Adds SignalR to the services container. services.AddSignalR(); In the Configure method of the same class, we configure the pipeline to use SignalR and intercept and wire up the request containing gameHub to our SignalR hub that we will be creating with the following code: //// Use - SignalR & let it know to intercept and map any request having gameHub. app.UseSignalR(routes => { routes.MapHub<GameHub>("gameHub"); }); The following is the code for both methods, for the sake of clarity and completion. Other methods and properties are removed for brevity: // This method gets called by the run-time. Use this method to add services to the container. public void ConfigureServices(IServiceCollection services) { services.AddMvc(); //// Adds SignalR to the services container. services.AddSignalR(); } // This method gets called by the runtime. Use this method to configure the HTTP request pipeline. public void Configure(IApplicationBuilder app, IHostingEnvironment env) { if (env.IsDevelopment()) { app.UseDeveloperExceptionPage(); app.UseBrowserLink(); } else { app.UseExceptionHandler("/Home/Error"); } app.UseStaticFiles(); app.UseMvc(routes => { routes.MapRoute( name: "default", template: "{controller=Home}/{action=Index}/{id?}"); }); //// Use - SignalR & let it know to intercept and map any request having gameHub. app.UseSignalR(routes => { routes.MapHub<GameHub>("gameHub"); }); The previous two steps set up SignalR for us. Now, let's start with the coding of the player registration form. We want the player to be registered with a name and display the picture. Later, the server will also need to know whether the player is playing, waiting for a move, searching for an opponent, and so on. Let's create the Player model in the Models folder in the app. The code comments are self-explanatory: /// <summary> /// The player class. Each player of Tic-Tac-Toe game would be an instance of this class. /// </summary> internal class Player { /// <summary> /// Gets or sets the name of the player. This would be set at the time user registers. /// </summary> public string Name { get; set; } /// <summary> /// Gets or sets the opponent player. The player against whom the player would be playing. /// This is determined/ set when the players click Find Opponent Button in the UI. /// </summary> public Player Opponent { get; set; } /// <summary> /// Gets or sets a value indicating whether the player is playing. /// This is set when the player starts a game. /// </summary> public bool IsPlaying { get; set; } /// <summary> /// Gets or sets a value indicating whether the player is waiting for opponent to make a move. /// </summary> public bool WaitingForMove { get; set; } /// <summary> /// Gets or sets a value indicating whether the player is searching for opponent. /// </summary> public bool IsSearchingOpponent { get; set; } /// <summary> /// Gets or sets the time when the player registered. /// </summary> public DateTime RegisterTime { get; set; } /// <summary> /// Gets or sets the image of the player. /// This would be set at the time of registration, if the user selects the image. /// </summary> public string Image { get; set; } /// <summary> /// Gets or sets the connection id of the player connection with the gameHub. 
/// </summary> public string ConnectionId { get; set; } } Now, we need to have a UI in place so that the player can fill in the form and register. We also need to show the image preview to the player when he/she browses the image. To do so, we will use the Index.cshtml view of the HomeController class that comes with the default MVC template. We will refer to the following two .js files in the _Layout.cshtml partial view so that they are available to all the views. Alternatively, you could add these in the Index.cshtml view as well, but its highly recommended that common scripts should be added in _Layout.cshtml. The version of the script file may be different in your case. These are the currently available latest versions. Although jQuery is not required to be the library of choice for us, we will use jQuery to keep the code clean, simple, and compact. With these references, we have jQuery and SignalR available to us on the client side: <script src="~/lib/jquery/dist/jquery.js"></script> <!-- jQuery--> <script src="~/js/signalr-client-1.0.0-alpha1-final.js"></script> <!-- SignalR--> After adding these references, create the simple HTML UI for the image preview and registration, as follows: <div id="divPreviewImage"> <!-- To display the browsed image--> <fieldset> <div class="form-group"> <div class="col-lg-2"> <image src="" id="previewImage" style="height:100px;width:100px;border:solid 2px dotted; float:left" /> </div> <div class="col-lg-10" id="divOpponentPlayer"> <!-- To display image of opponent player--> <image src="" id="opponentImage" style="height:100px;width:100px;border:solid 2px dotted; float:right;" /> </div> </div> </fieldset> </div> <div id="divRegister"> <!-- Our Registration form--> <fieldset> <legend>Register</legend> <div class="form-group"> <label for="name" class="col-lg-2 control- label">Name</label> <div class="col-lg-10"> <input type="text" class="form-control" id="name" placeholder="Name"> </div> </div> <div class="form-group"> <label for="image" class="col-lg-2 control- label">Avatar</label> <div class="col-lg-10"> <input type="file" class="form-control" id="image" /> </div> </div> <div class="form-group"> <div class="col-lg-10 col-lg-offset-2"> <button type="button" class="btn btn-primary" id="btnRegister">Register</button> </div> </div> </fieldset> </div> When the player registers by clicking the Register button, the player's details need to be sent to the server. To do this, we will write the JavaScript to send details to our gameHub: let hubUrl = '/gameHub'; let httpConnection = new signalR.HttpConnection(hubUrl); let hubConnection = new signalR.HubConnection(httpConnection); var playerName = ""; var playerImage = ""; var hash = "#"; hubConnection.start(); $("#btnRegister").click(function () { //// Fires on button click playerName = $('#name').val(); //// Sets the player name with the input name. playerImage = $('#previewImage').attr('src'); //// Sets the player image variable with specified image var data = playerName.concat(hash, playerImage); //// The registration data to be sent to server. hubConnection.invoke('RegisterPlayer', data); //// Invoke the "RegisterPlayer" method on gameHub. }); $("#image").change(function () { //// Fires when image is changed. readURL(this); //// HTML 5 way to read the image as data url. }); function readURL(input) { if (input.files && input.files[0]) { //// Go in only if image is specified. 
var reader = new FileReader(); reader.onload = imageIsLoaded; reader.readAsDataURL(input.files[0]); } } function imageIsLoaded(e) { if (e.target.result) { $('#previewImage').attr('src', e.target.result); //// Sets the image source for preview. $("#divPreviewImage").show(); } }; The player now has a UI to input the name and image, see the preview image, and click Register. On clicking the Register button, we are sending the concatenated name and image to the gameHub on the server through hubConnection.invoke('RegisterPlayer', data);  So, it's quite simple for the client to make a call to the server. Initialize the hubConnection by specifying hub name as we did in the first three lines of the preceding code snippet. Start the connection by hubConnection.start();, and then invoke the server hub method by calling the invoke method, specifying the hub method name and the parameter it expects. We have not yet created the hub, so let's create the GameHub class on the server: /// <summary> /// The Game Hub class derived from Hub /// </summary> public class GameHub : Hub { /// <summary> /// To keep the list of all the connected players registered with the game hub. We could have /// used normal list but used concurrent bag as its thread safe. /// </summary> private static readonly ConcurrentBag<Player> players = new ConcurrentBag<Player>(); /// <summary> /// Registers the player with name and image. /// </summary> /// <param name="nameAndImageData">The name and image data sent by the player.</param> public void RegisterPlayer(string nameAndImageData) { var splitData = nameAndImageData?.Split(new char[] { '#' }, StringSplitOptions.None); string name = splitData[0]; string image = splitData[1]; var player = players?.FirstOrDefault(x => x.ConnectionId == Context.ConnectionId); if (player == null) { player = new Player { ConnectionId = Context.ConnectionId, Name = name, IsPlaying = false, IsSearchingOpponent = false, RegisterTime = DateTime.UtcNow, Image = image }; if (!players.Any(j => j.Name == name)) { players.Add(player); } } this.OnRegisterationComplete(Context.ConnectionId); } /// <summary> /// Fires on completion of registration. /// </summary> /// <param name="connectionId">The connectionId of the player which registered</param> public void OnRegisterationComplete(string connectionId) { //// Notify this connection id that the registration is complete. this.Clients.Client(connectionId). InvokeAsync(Constants.RegistrationComplete); } } The code comments make it self-explanatory. The class should derive from the SignalR Hub class for it to be recognized as Hub. There are two methods of interest which can be overridden. Notice that both the methods follow the async pattern and hence return Task: Task OnConnectedAsync(): This method fires when a client/player connects to the hub. Task OnDisconnectedAsync(Exception exception): This method fires when a client/player disconnects or looses the connection. We will override this method to handle the scenario where the player disconnects. 
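Before moving on to the hub properties, here is a minimal, hedged sketch of what such an OnDisconnectedAsync override might look like. It is not the exact implementation wired up later in this article; the Constants.OpponentDisconnected name simply follows the Constants naming pattern used elsewhere in the hub and is an assumption:

/// <summary>
/// Fires when a player disconnects. Illustrative sketch only: find the player by the
/// current connection id and tell the opponent (if any) that this player has left.
/// </summary>
public override async Task OnDisconnectedAsync(Exception exception)
{
    var player = players.FirstOrDefault(x => x.ConnectionId == Context.ConnectionId);
    if (player?.Opponent != null)
    {
        //// The client reacts to this in the 'opponentDisconnected' handler shown later.
        await this.Clients.Client(player.Opponent.ConnectionId)
            .InvokeAsync(Constants.OpponentDisconnected);
    }

    await base.OnDisconnectedAsync(exception);
}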
There are also a few properties that the hub class exposes: Context: This property is of type HubCallerContext and gives us access to the following properties: Connection: Gives access to the current connection User: Gives access to the ClaimsPrincipal of the user who is currently connected ConnectionId: Gives the current connection ID string Clients: This property is of type IHubClients and gives us the way to communicate to all the clients via the client proxy Groups: This property is of type IGroupManager and provides a way to add and remove connections to the group asynchronously To keep the things simple, we are not using a database to keep track of our registered players. Rather we will use an in-memory collection to keep the registered players. We could have used a normal list of players, such as List<Player>, but then we would need all the thread safety and use one of the thread safety primitives, such as lock, monitor, and so on, so we are going with ConcurrentBag<Player>, which is thread safe and reasonable for our game development. That explains the declaration of the players collection in the class. We will need to do some housekeeping to add players to this collection when they resister and remove them when they disconnect. We saw in previous step that the client invoked the RegisterPlayer method of the hub on the server, passing in the name and image data. So we defined a public method in our hub, named RegisterPlayer, accepting the name and image data string concatenated through #. This is just one of the simple ways of accepting the client data for demonstration purposes, we can also use strongly typed parameters. In this method, we split the string on # and extract the name as the first part and the image as the second part. We then check if the player with the current connection ID already exists in our players collection. If it doesn't, we create a Player object with default values and add them to our players collection. We are distinguishing the player based on the name for demonstration purposes, but we can add an Id property in the Player class and make different players have the same name also. After the registration is complete, the server needs to update the player, that the registration is complete and the player can then look for the opponent. To do so, we make a call to the OnRegistrationComplete method which invokes a method called  registrationComplete on the client with the current connection ID. Let's understand the code to invoke the method on the client: this.Clients.Client(connectionId).InvokeAsync(Constants.RegistrationComplete); On the Clients property, we can choose a client having a specific connection ID (in this case, the current connection ID from the Context) and then call InvokeAsync to invoke a method on the client specifying the method name and parameters as required. In the preceding case method, the name is registrationComplete with no parameters. Now we know how to invoke a server method from the client and also how to invoke the client method from the server. We also know how to select a specific client and invoke a method there. We can invoke the client method from the server, for all the clients, a group of clients, or a specific client, so rest of the coding stuff would be just a repetition of these two concepts. Next, we need to implement the registrationComplete method on the client. On registration completion, the registration form should be hidden and the player should be able to find an opponent to play against. 
To do so, we would write JavaScript code to hide the registration form and show the UI for finding the opponent. On clicking the Find Opponent button, we need the server to pair us against an opponent, so we need to invoke a hub method on server to find opponent. The server can respond us with two outcomes: It finds an opponent player to play against. In this case, the game can start so we need to simulate the coin toss, determine the player who can make the first move, and start the game. This would be a game board in the client-user interface. It doesn't find an opponent and asks the player to wait for another player to register and search for an opponent. This would be a no opponent found screen in the client. In both the cases, the server would do some processing and invoke a method on the client. Since we need a lot of different user interfaces for different scenarios, let's code the HTML markup inside div to make it easier to show and hide sections based on the server response. We will add the following code snippet in the body. The comments specify the purpose of each of the div elements and markup inside them: <div id="divFindOpponentPlayer"> <!-- Section to display Find Opponent --> <fieldset> <legend>Find a player to play against!</legend> <div class="form-group"> <input type="button" class="btn btn-primary" id="btnFindOpponentPlayer" value="Find Opponent Player" /> </div> </fieldset> </div> <div id="divFindingOpponentPlayer"> <!-- Section to display opponent not found, wait --> <fieldset> <legend>Its lonely here!</legend> <div class="form-group"> Looking for an opponent player. Waiting for someone to join! </div> </fieldset> </div> <div id="divGameInformation" class="form-group"> <!-- Section to display game information--> <div class="form-group" id="divGameInfo"></div> <div class="form-group" id="divInfo"></div> </div> <div id="divGame" style="clear:both"> <!-- Section where the game board would be displayed --> <fieldset> <legend>Game On</legend> <div id="divGameBoard" style="width:380px"></div> </fieldset> </div> The following client-side code would take care of Steps 7 and 8. Though the comments are self-explanatory, we will quickly see what all stuff is that is going on here. We handle the registartionComplete method and display the Find Opponent Player section. This section has a button to find an opponent player called btnFindOpponentPlayer. We define the event handler of the button to invoke the FindOpponent method on the hub. We will see the hub method implementation later, but we know that the hub method would either find an opponent or would not find an opponent, so we have defined the methods opponentFound and opponentNotFound, respectively, to handle these scenarios. In the opponentNotFound method, we just display a section in which we say, we do not have an opponent player. In the opponentFound method, we display the game section, game information section, opponent display picture section, and draw the Tic-Tac-Toe game board as a 3×3 grid using CSS styling. All the other sections are hidden: $("#btnFindOpponentPlayer").click(function () { hubConnection.invoke('FindOpponent'); }); hubConnection.on('registrationComplete', data => { //// Fires on registration complete. Invoked by server hub $("#divRegister").hide(); // hide the registration div $("#divFindOpponentPlayer").show(); // display find opponent player div. }); hubConnection.on('opponentNotFound', data => { //// Fires when no opponent is found. 
$('#divFindOpponentPlayer').hide(); //// hide the find opponent player section. $('#divFindingOpponentPlayer').show(); //// display the finding opponent player div. }); hubConnection.on('opponentFound', (data, image) => { //// Fires when opponent player is found. $('#divFindOpponentPlayer').hide(); $('#divFindingOpponentPlayer').hide(); $('#divGame').show(); //// Show game board section. $('#divGameInformation').show(); //// Show game information $('#divOpponentPlayer').show(); //// Show opponent player image. opponentImage = image; //// sets the opponent player image for display $('#opponentImage').attr('src', opponentImage); //// Binds the opponent player image $('#divGameInfo').html("<br/><span><strong> Hey " + playerName + "! You are playing against <i>" + data + "</i> </strong></span>"); //// displays the information of opponent that the player is playing against. //// Draw the tic-tac-toe game board, A 3x3 grid :) by proper styling. for (var i = 0; i < 9; i++) { $("#divGameBoard").append("<span class='marker' id=" + i + " style='display:block;border:2px solid black;height:100px;width:100px;float:left;margin:10px;'>" + i + "</span>"); } }); First we need to have a Game object to track a game, players involved, moves left, and check if there is a winner. We will have a Game class defined as per the following code. The comments detail the purpose of the methods and the properties defined: internal class Game { /// <summary> /// Gets or sets the value indicating whether the game is over. /// </summary> public bool IsOver { get; private set; } /// <summary> /// Gets or sets the value indicating whether the game is draw. /// </summary> public bool IsDraw { get; private set; } /// <summary> /// Gets or sets Player 1 of the game /// </summary> public Player Player1 { get; set; } /// <summary> /// Gets or sets Player 2 of the game /// </summary> public Player Player2 { get; set; } /// <summary> /// For internal housekeeping, To keep track of value in each of the box in the grid. /// </summary> private readonly int[] field = new int[9]; /// <summary> /// The number of moves left. We start the game with 9 moves remaining in a 3x3 grid. /// </summary> private int movesLeft = 9; /// <summary> /// Initializes a new instance of the <see cref="Game"/> class. /// </summary> public Game() { //// Initialize the game for (var i = 0; i < field.Length; i++) { field[i] = -1; } } /// <summary> /// Place the player number at a given position for a player /// </summary> /// <param name="player">The player number would be 0 or 1</param> /// <param name="position">The position where player number would be placed, should be between 0 and ///8, both inclusive</param> /// <returns>Boolean true if game is over and we have a winner.</returns> public bool Play(int player, int position) { if (this.IsOver) { return false; } //// Place the player number at the given position this.PlacePlayerNumber(player, position); //// Check if we have a winner. If this returns true, //// game would be over and would have a winner, else game would continue. return this.CheckWinner(); } } Now we have the entire game mystery solved with the Game class. We know when the game is over, we have the method to place the player marker, and check the winner. The following server side-code on the GameHub will handle Steps 7 and 8: /// <summary> /// The list of games going on. /// </summary> private static readonly ConcurrentBag<Game> games = new ConcurrentBag<Game>(); /// <summary> /// To simulate the coin toss. 
Like heads and tails, 0 belongs to one player and 1 to opponent. /// </summary> private static readonly Random toss = new Random(); /// <summary> /// Finds the opponent for the player and sets the Seraching for Opponent property of player to true. /// We will use the connection id from context to identify the current player. /// Once we have 2 players looking to play, we can pair them and simulate coin toss to start the game. /// </summary> public void FindOpponent() { //// First fetch the player from our players collection having current connection id var player = players.FirstOrDefault(x => x.ConnectionId == Context.ConnectionId); if (player == null) { //// Since player would be registered before making this call, //// we should not reach here. If we are here, something somewhere in the flow above is broken. return; } //// Set that player is seraching for opponent. player.IsSearchingOpponent = true; //// We will follow a queue, so find a player who registered earlier as opponent. //// This would only be the case if more than 2 players are looking for opponent. var opponent = players.Where(x => x.ConnectionId != Context.ConnectionId && x.IsSearchingOpponent && !x.IsPlaying).OrderBy(x =>x.RegisterTime).FirstOrDefault(); if (opponent == null) { //// Could not find any opponent, invoke opponentNotFound method in the client. Clients.Client(Context.ConnectionId) .InvokeAsync(Constants.OpponentNotFound); return; } //// Set both players as playing. player.IsPlaying = true; player.IsSearchingOpponent = false; //// Make him unsearchable for opponent search opponent.IsPlaying = true; opponent.IsSearchingOpponent = false; //// Set each other as opponents. player.Opponent = opponent; opponent.Opponent = player; //// Notify both players that they can play by invoking opponentFound method for both the players. //// Also pass the opponent name and opoonet image, so that they can visualize it. //// Here we are directly using connection id, but group is a good candidate and use here. Clients.Client(Context.ConnectionId) .InvokeAsync(Constants.OpponentFound, opponent.Name, opponent.Image); Clients.Client(opponent.ConnectionId) .InvokeAsync(Constants.OpponentFound, player.Name, player.Image); //// Create a new game with these 2 player and add it to games collection. games.Add(new Game { Player1 = player, Player2 = opponent }); } Here, we have created a games collection to keep track of ongoing games and a Random field named toss to simulate the coin toss. How FindOpponent works is documented in the comments and is intuitive to understand. Once the game starts, each player has to make a move and then wait for the opponent to make a move, until the game ends. The move is made by clicking on the available grid cells. Here, we need to ensure that cell position that is already marked by one of the players is not changed or marked. So, as soon as a valid cell is marked, we set its CSS class to notAvailable so we know that the cell is taken. While clicking on a cell, we will check whether the cell has notAvailablestyle. If yes, it cannot be marked. If not, the cell can be marked and we then send the marked position to the server hub. We also see the waitingForMove, moveMade, gameOver, and opponentDisconnected events invoked by the server based on the game state. The code is commented and is pretty straightforward. 
The moveMade method in the following code makes use of the MoveInformation class, which we will define at the server for sharing move information with both players: //// Triggers on clicking the grid cell. $(document).on('click', '.marker', function () { if ($(this).hasClass("notAvailable")) { //// Cell is already taken. return; } hubConnection.invoke('MakeAMove', $(this)[0].id); //// Cell is valid, send details to hub. }); //// Fires when player has to make a move. hubConnection.on('waitingForMove', data => { $('#divInfo').html("<br/><span><strong> Your turn <i>" + playerName + "</i>! Make a winning move! </strong></span>"); }); //// Fires when move is made by either player. hubConnection.on('moveMade', data => { if (data.Image == playerImage) { //// Move made by player. $("#" + data.ImagePosition).addClass("notAvailable"); $("#" + data.ImagePosition).css('background-image', 'url(' + data.Image + ')'); $('#divInfo').html("<br/><strong>Waiting for <i>" + data.OpponentName + "</i> to make a move. </strong>"); } else { $("#" + data.ImagePosition).addClass("notAvailable"); $("#" + data.ImagePosition).css('background-image', 'url(' + data.Image + ')'); $('#divInfo').html("<br/><strong>Waiting for <i>" + data.OpponentName + "</i> to make a move. </strong>"); } }); //// Fires when the game ends. hubConnection.on('gameOver', data => { $('#divGame').hide(); $('#divInfo').html("<br/><span><strong>Hey " + playerName + "! " + data + " </strong></span>"); $('#divGameBoard').html(" "); $('#divGameInfo').html(" "); $('#divOpponentPlayer').hide(); }); //// Fires when the opponent disconnects. hubConnection.on('opponentDisconnected', data => { $("#divRegister").hide(); $('#divGame').hide(); $('#divGameInfo').html(" "); $('#divInfo').html("<br/><span><strong>Hey " + playerName + "! Your opponent disconnected or left the battle! You are the winner ! Hip Hip Hurray!!!</strong></span>"); }); After every move, both players need to be updated by the server about the move made, so that both players' game boards are in sync. So, on the server side we will need an additional model called MoveInformation, which will contain information on the latest move made by the player and the server will send this model to both the clients to keep them in sync: /// <summary> /// While playing the game, players would make moves. This class contains the information of those moves. /// </summary> internal class MoveInformation { /// <summary> /// Gets or sets the opponent name. /// </summary> public string OpponentName { get; set; } /// <summary> /// Gets or sets the player who made the move. /// </summary> public string MoveMadeBy { get; set; } /// <summary> /// Gets or sets the image position. The position in the game board (0-8) where the player placed his /// image. /// </summary> public int ImagePosition { get; set; } /// <summary> /// Gets or sets the image. The image of the player that he placed in the board (0-8) /// </summary> public string Image { get; set; } } Finally, we will wire up the remaining methods in the GameHub class to complete the game coding. The MakeAMove method is called every time a player makes a move. Also, we have overidden the OnDisconnectedAsync method to inform a player when their opponent disconnects. In this method, we also keep our players and games list current. The comments in the code explain the workings of the methods: /// <summary> /// Invoked by the player to make a move on the board. 
/// </summary> /// <param name="position">The position to place the player</param> public void MakeAMove(int position) { //// Lets find a game from our list of games where one of the player has the same connection Id as the current connection has. var game = games?.FirstOrDefault(x => x.Player1.ConnectionId == Context.ConnectionId || x.Player2.ConnectionId == Context.ConnectionId); if (game == null || game.IsOver) { //// No such game exist! return; } //// Designate 0 for player 1 int symbol = 0; if (game.Player2.ConnectionId == Context.ConnectionId) { //// Designate 1 for player 2. symbol = 1; } var player = symbol == 0 ? game.Player1 : game.Player2; if (player.WaitingForMove) { return; } //// Update both the players that move is made. Clients.Client(game.Player1.ConnectionId) .InvokeAsync(Constants.MoveMade, new MoveInformation { OpponentName = player.Name, ImagePosition = position, Image = player.Image }); Clients.Client(game.Player2.ConnectionId) .InvokeAsync(Constants.MoveMade, new MoveInformation { OpponentName = player.Name, ImagePosition = position, Image = player.Image }); //// Place the symbol and look for a winner after every move. if (game.Play(symbol, position)) { Remove<Game>(games, game); Clients.Client(game.Player1.ConnectionId) .InvokeAsync(Constants.GameOver, $"The winner is {player.Name}"); Clients.Client(game.Player2.ConnectionId) .InvokeAsync(Constants.GameOver, $"The winner is {player.Name}"); player.IsPlaying = false; player.Opponent.IsPlaying = false; this.Clients.Client(player.ConnectionId) .InvokeAsync(Constants.RegistrationComplete); this.Clients.Client(player.Opponent.ConnectionId) .InvokeAsync(Constants.RegistrationComplete); } //// If no one won and its a tame draw, update the players that the game is over and let them look for new game to play. if (game.IsOver && game.IsDraw) { Remove<Game>(games, game); Clients.Client(game.Player1.ConnectionId) .InvokeAsync(Constants.GameOver, "Its a tame draw!!!"); Clients.Client(game.Player2.ConnectionId) .InvokeAsync(Constants.GameOver, "Its a tame draw!!!"); player.IsPlaying = false; player.Opponent.IsPlaying = false; this.Clients.Client(player.ConnectionId) .InvokeAsync(Constants.RegistrationComplete); this.Clients.Client(player.Opponent.ConnectionId) .InvokeAsync(Constants.RegistrationComplete); } if (!game.IsOver) { player.WaitingForMove = !player.WaitingForMove; player.Opponent.WaitingForMove = !player.Opponent.WaitingForMove; Clients.Client(player.Opponent.ConnectionId) .InvokeAsync(Constants.WaitingForOpponent, player.Opponent.Name); Clients.Client(player.ConnectionId) .InvokeAsync(Constants.WaitingForOpponent, player.Opponent.Name); } } With this, we are done with the coding of the game and are ready to run the game app. So there you have it! You've just built your first game in .NET Core! The detailed source code can be downloaded from Github. If you're interested in learning more, head on over to get the book, .NET Core 2.0 By Example, by Rishabh Verma and Neha Shrivastava. Applying Single Responsibility principle from SOLID in .NET Core Unit Testing in .NET Core with Visual Studio 2017 for better code quality Get to know ASP.NET Core Web API [Tutorial]


Understanding Go Internals: defer, panic() and recover() functions [Tutorial]

Packt Editorial Staff
09 Jul 2018
8 min read
The Go programming language, often referred to as Golang, is making strides with masterclass developments and architecture by the greatest programming minds. Go's features are extremely handy, and you can use them all the time. However, there is nothing more rewarding than being able to see and understand what is going on in the background and how Go operates behind the scenes. In this article, we will learn to use the defer keyword and the panic() and recover() functions in Go. This article is extracted from the First Edition of Mastering Go written by Mihalis Tsoukalos. The concepts discussed in this article (and more) have been updated or improved in the third edition of Mastering Go.

The defer keyword

The defer keyword postpones the execution of a function until the surrounding function returns. It is widely used in file input and output operations because it saves you from having to remember when to close an opened file: the defer keyword allows you to put the function call that closes an opened file near the function call that opened it. You will also see defer in action in the section that talks about the panic() and recover() built-in Go functions. It is very important to remember that deferred functions are executed in Last In First Out (LIFO) order after the return of the surrounding function. Put simply, this means that if you defer function f1() first, function f2() second, and function f3() third in the same surrounding function, when the surrounding function is about to return, function f3() will be executed first, function f2() will be executed second, and function f1() will be the last one to get executed. As this definition of defer is a little unclear, I think that you will understand the use of defer a little better by looking at the Go code and the output of the defer.go program, which will be presented in three parts. The first part of the program follows:

package main

import (
    "fmt"
)

func d1() {
    for i := 3; i > 0; i-- {
        defer fmt.Print(i, " ")
    }
}

Apart from the import block, the preceding Go code implements a function named d1() with a for loop and a defer statement that will be executed three times. The second part of defer.go contains the following Go code:

func d2() {
    for i := 3; i > 0; i-- {
        defer func() {
            fmt.Print(i, " ")
        }()
    }
    fmt.Println()
}

In this part of the code, you can see the implementation of another function that is named d2(). The d2() function also contains a for loop and a defer statement that will also be executed three times. However, this time the defer keyword is applied to an anonymous function instead of a single fmt.Print() statement. Additionally, the anonymous function takes no parameters. The last part of the Go code follows:

func d3() {
    for i := 3; i > 0; i-- {
        defer func(n int) {
            fmt.Print(n, " ")
        }(i)
    }
}

func main() {
    d1()
    d2()
    fmt.Println()
    d3()
    fmt.Println()
}

Apart from the main() function that calls the d1(), d2(), and d3() functions, you can also see the implementation of the d3() function, which has a for loop that uses the defer keyword on an anonymous function. However, this time the anonymous function requires one integer parameter named n. The Go code tells us that the n parameter takes its value from the i variable used in the for loop. Executing defer.go will create the following output:

$ go run defer.go
1 2 3
0 0 0
1 2 3

You will most likely find the generated output complicated and challenging to understand.
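To make that output a little less mysterious, here is a small standalone sketch, not part of defer.go (the function name instrumented() is illustrative only). It passes i as an argument, the same way d3() does, and prints a message both when each deferred call is registered and when it actually runs:

package main

import (
    "fmt"
)

func instrumented() {
    for i := 3; i > 0; i-- {
        // This line runs immediately, inside the loop.
        fmt.Println("registering deferred call with i =", i)
        // The argument i is evaluated right now, but the call only runs later.
        defer func(n int) {
            fmt.Println("running deferred call with n =", n)
        }(i)
    }
    fmt.Println("instrumented() is about to return")
}

func main() {
    instrumented()
}

The registering lines print for i equal to 3, 2, and 1, in that order; only after the "about to return" line do the deferred calls run, in LIFO order, with n equal to 1, 2, and 3. The behaviour is the same across Go versions, because the value of i is evaluated at the moment each defer statement is registered. Keep this picture in mind while reading the explanation that follows.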
All of this underscores the fact that the operation and the results of the use of defer can be tricky if your code is not clear and unambiguous. Let's examine the results in order to get a better idea of how tricky defer can be if you do not pay close attention to your code. We will start with the first line of the output (1 2 3), which is generated by the d1() function. The values of i in d1() are 3, 2, and 1, in that order. The function that is deferred in d1() is the fmt.Print() statement. As a result, when the d1() function is about to return, you get the three values of the i variable of the for loop in reverse order, because deferred functions are executed in LIFO order.

Now, let us inspect the second line of the output, which is produced by the d2() function. It is really strange that we got three zeros instead of 1 2 3 in the output. The reason for this, however, is relatively simple. After the for loop has ended, the value of i is 0, because it is that value of i that made the for loop terminate. However, the tricky part here is that the deferred anonymous function is evaluated after the for loop ends, because it has no parameters. This means that it is evaluated three times for an i value of 0, hence the generated output! This kind of confusing code is what might lead to the creation of nasty bugs in your projects, so try to avoid it!

Lastly, we will talk about the third line of the output, which is generated by the d3() function. Due to the parameter of the anonymous function, each time the anonymous function is deferred, it gets and uses the current value of i. As a result, each execution of the anonymous function has a different value to process, thus the generated output. After this, it should be clear that the best approach to the use of defer is the third one, which is exhibited in the d3() function, because you intentionally pass the desired variable to the anonymous function in an easy-to-understand way.

Panic and Recover

This technique involves the use of the panic() and recover() functions, and it will be presented in panicRecover.go, which you will review in three parts. Strictly speaking, panic() is a built-in Go function that terminates the current flow of a Go program and starts panicking! On the other hand, the recover() function, which is also a built-in Go function, allows you to take back control of a goroutine that just panicked using panic(). The first part of the program follows:

package main

import (
    "fmt"
)

func a() {
    fmt.Println("Inside a()")
    defer func() {
        if c := recover(); c != nil {
            fmt.Println("Recover inside a()!")
        }
    }()
    fmt.Println("About to call b()")
    b()
    fmt.Println("b() exited!")
    fmt.Println("Exiting a()")
}

Apart from the import block, this part includes the implementation of the a() function. The most important part of the a() function is the defer block of code, which implements an anonymous function that will be called when there is a call to panic(). The second code segment of panicRecover.go follows next:

func b() {
    fmt.Println("Inside b()")
    panic("Panic in b()!")
    fmt.Println("Exiting b()")
}

The last part of the program, which illustrates the panic() and recover() functions, is as follows:

func main() {
    a()
    fmt.Println("main() ended!")
}

Executing panicRecover.go will create the following output:

$ go run panicRecover.go
Inside a()
About to call b()
Inside b()
Recover inside a()!
main() ended!

What just happened here is really impressive!
However, as you can see from the output, the a() function did not end normally, because its last two statements did not get executed:

fmt.Println("b() exited!")
fmt.Println("Exiting a()")

Nevertheless, the good thing is that panicRecover.go ended according to our will without panicking, because the anonymous function used in defer took control of the situation! Also note that function b() knows nothing about function a(). However, function a() contains Go code that handles the panic condition of function b()!

Using the panic function on its own

You can also use the panic() function on its own, without any attempt to recover, and this subsection will show you its results using the Go code of justPanic.go, which will be presented in two parts. The first part of justPanic.go follows next:

package main

import (
    "fmt"
    "os"
)

As you can see, the use of panic() does not require any extra Go packages. The second part of justPanic.go is shown in the following Go code:

func main() {
    if len(os.Args) == 1 {
        panic("Not enough arguments!")
    }
    fmt.Println("Thanks for the argument(s)!")
}

If your Go program does not have at least one command line argument, it will call the panic() function. The panic() function takes one parameter, which is the error message that you want to print on the screen. Executing justPanic.go on a macOS High Sierra machine will create the following output:

$ go run justPanic.go
panic: Not enough arguments!

goroutine 1 [running]:
main.main()
    /Users/mtsouk/ch2/code/justPanic.go:10 +0x9e
exit status 2

Thus, using the panic() function on its own will terminate the Go program without giving you the opportunity to recover! Therefore, the use of the panic() and recover() pair is much more practical and professional than just using panic() alone.

To summarize, we covered some interesting Go topics: the defer keyword, and the panic() and recover() functions. To explore other major features and packages in Go, get our latest edition in Go programming, Mastering Go, written by Mihalis Tsoukalos.

Implementing memory management with Golang’s garbage collector
Why is Go the go-to language for cloud native development? – An interview with Mina Andrawos
How to build a basic server side chatbot using Go
How Concurrency and Parallelism works in Golang [Tutorial]


How to Deploy a Blog with Ghost and Docker

Felix Rabe
07 Nov 2014
6 min read
2013 gave birth to two wonderful Open Source projects: Ghost and Docker. This post will show you what the buzz is all about, and how you can use them together. So what are Ghost and Docker, exactly? Ghost is an exciting new blogging platform, written in JavaScript running on Node.js. It features a simple and modern user experience, as well as very transparent and accessible developer communications. This blog post covers Ghost 0.4.2. Docker is a very useful new development tool to package applications together with their dependencies for automated and portable deployment. It is based on Linux Containers (lxc) for lightweight virtualization, and AUFS for filesystem layering. This blog post covers Docker 1.1.2. Install Docker If you are on Windows or Mac OS X, the easiest way to get started using Docker is Boot2Docker. For Linux and more in-depth instructions, consult one of the Docker installation guides. Go ahead and install Docker via one of the above links, then come back and run: docker version You run this in your terminal to verify your installation. If you get about eight lines of detailed version information, the installation was successful. Just running docker will provide you with a list of commands, and docker help <command> will show a command's usage. If you use Boot2Docker, remember to export DOCKER_HOST=tcp://192.168.59.103:2375. Now, to get the Ubuntu 14.04 base image downloaded (which we'll use in the next sections), run the following command: docker run --rm ubuntu:14.04 /bin/true This will take a while, but only for the first time. There are many more Docker images available at the Docker Hub Registry. Hello Docker To give you a quick glimpse into what Docker can do for you, run the following command: docker run --rm ubuntu:14.04 /bin/echo Hello Docker This runs /bin/echo Hello Docker in its own virtual Ubuntu 14.04 environment, but since it uses Linux Containers instead of booting a complete operating system in a virtual machine, this only takes less than a second to complete. Pretty sweet, huh? To run Bash, provide the -ti flags for interactivity: docker run --rm -ti ubuntu:14.04 /bin/bash The --rm flag makes sure that the container gets removed after use, so any files you create in that Bash session get removed after logging out. For more details, see the Docker Run Reference. Build the Ghost image In the previous section, you've run the ubuntu:14.04 image. In this section, we'll build an image for Ghost that we can then use to quickly launch a new Ghost container. While you could get a pre-made Ghost Docker image, for the sake of learning, we'll build our own. About the terminology: A Docker image is analogous to a program stored on disk, while a Docker container is analogous to a process running in memory. Now create a new directory, such as docker-ghost, with the following files — you can also find them in this Gist on GitHub: package.json: {} This is the bare minimum actually required, and will be expanded with the current Ghost dependency by the Dockerfile command npm install --save ghost when building the Docker image. server.js: #!/usr/bin/env node var ghost = require('ghost'); ghost({ config: __dirname + '/config.js' }); This is all that is required to use Ghost as an NPM module. config.js: config = require('./node_modules/ghost/config.example.js'); config.development.server.host = '0.0.0.0'; config.production.server.host = '0.0.0.0'; module.exports = config; This will make the Ghost server accessible from outside of the Docker container. 
Dockerfile: # DOCKER-VERSION 1.1.2 FROM ubuntu:14.04 # Speed up apt-get according to https://gist.github.com/jpetazzo/6127116 RUN echo "force-unsafe-io" > /etc/dpkg/dpkg.cfg.d/02apt-speedup RUN echo "Acquire::http {No-Cache=True;};" > /etc/apt/apt.conf.d/no-cache # Update the distribution ENV DEBIAN_FRONTEND noninteractive RUN apt-get update RUN apt-get upgrade -y # https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager RUN apt-get install -y software-properties-common RUN add-apt-repository -y ppa:chris-lea/node.js RUN apt-get update RUN apt-get install -y python-software-properties python g++ make nodejs git # git needed by 'npm install' ADD . /src RUN cd /src; npm install --save ghost ENTRYPOINT ["node", "/src/server.js"] # Override ubuntu:14.04 CMD directive: CMD [] EXPOSE 2368 This Dockerfile will create a Docker image with Node.js and the dependencies needed to build the Ghost NPM module, and prepare Ghost to be run via Docker. See Documentation for details on the syntax. Now build the Ghost image using: cd docker-ghost docker build -t ghost-image . This will take a while, but you might have to Ctrl-C and re-run the command if, for more than a couple of minutes, you are stuck at the following step: > node-pre-gyp install --fallback-to-build Run Ghost Now start the Ghost container: docker run --name ghost-container -d -p 2368:2368 ghost-image If you run Boot2Docker, you'll have to figure out its IP address: boot2docker ip Usually, that's 192.168.59.103, so by going to http://192.168.59.103:2368, you will see your fresh new Ghost blog. Yay! For the admin interface, go to http://192.168.59.103:2368/ghost. Manage the Ghost container The following commands will come in handy to manage the Ghost container: # Show all running containers: docker ps -a # Show the container logs: docker logs [-f] ghost-container # Stop Ghost via a simulated Ctrl-C: docker kill -s INT ghost-container # After killing Ghost, this will restart it: docker start ghost-container # Remove the container AND THE DATA (!): docker rm ghost-container What you'll want to do next Some steps that are outside the scope of this post, but some steps that you might want to pursue next, are: Copy and change the Ghost configuration that currently resides in node_modules/ghost/config.js. Move the Ghost content directory into a separate Docker volume to allow for upgrades and data backups. Deploy the Ghost image to production on your public server at your hosting provider. Also, you might want to change the Ghost configuration to match your domain and change the port to 80. How I use Ghost with Docker I run Ghost in Docker successfully over at Named Data Education, a new blog about Named Data Networking. I like the fact that I can replicate an isolated setup identically on that server as well as on my own laptop. Ghost resources Official docs: The Ghost Guide, and the FAQ- / How-To-like User Guide. How To Install Ghost, Ghost for Beginners and All About Ghost are a collection of sites that provide more in-depth material on operating a Ghost blog. By the same guys: All Ghost Themes. Ghost themes on ThemeForest is also a great collection of themes. Docker resources The official documentation provides many guides and references. Docker volumes are explained here and in this post by Michael Crosby. About the Author Felix Rabe has been programming and working with different technologies and companies at different levels since 1993. 
Currently he is researching and promoting Named Data Networking (http://named-data.net/), an evolution of the Internet architecture that currently relies on the host-bound Internet Protocol. You can find our very best Docker content on our dedicated Docker page. Whatever you do with software, Docker will help you do it better.

Deploy Node.js Apps with AWS Code Deploy

Ankit Patial
14 Sep 2015
7 min read
As an application developer, you must be familiar with the complexity of deploying apps to a fleet of servers with minimal downtime. AWS introduced a new service called AWS Code Deploy to ease the deployment of applications to EC2 instances on the AWS cloud. Before explaining the full process, I will assume that you are using AWS VPC, that all of your EC2 instances are inside the VPC, and that each instance has an IAM role. Let's see how we can deploy a Node.js application to AWS.

Install AWS Code Deploy Agent

The first thing you need to do is to install aws-codedeploy-agent on each machine that you want your code deployed on. Before installing the agent, please make sure that a trust relationship for codedeploy.us-west-2.amazonaws.com and codedeploy.us-east-1.amazonaws.com is added to the IAM role that the EC2 instance is using. Not sure where that is? Click on the top-left dropdown with your account name in the AWS console and select the Security Credentials option; you will be redirected to a new page. Select Roles from the left menu, look for the IAM role that the EC2 instance is using, click it, and scroll to the bottom, where you will see the Edit Trust Relationship button. Click this button to edit the trust relationship, and make sure it looks like the following:

...
"Principal": {
    "Service": [
        "ec2.amazonaws.com",
        "codedeploy.us-west-2.amazonaws.com",
        "codedeploy.us-east-1.amazonaws.com"
    ]
}
...

OK, we are good to install the AWS Code Deploy agent; just make sure ruby2.0 is installed first. Use the following script to install the agent:

aws s3 cp s3://aws-codedeploy-us-east-1/latest/install ./install-aws-codedeploy-agent --region us-east-1
chmod +x ./install-aws-codedeploy-agent
sudo ./install-aws-codedeploy-agent auto
rm install-aws-codedeploy-agent

Hopefully nothing will go wrong and the agent will be installed and running. To check whether it is running, try the following command:

sudo service codedeploy-agent status

Let's move to the next step.

Create Code Deploy Application

Log in to your AWS account. Under Deployment & Management, click on the Code Deploy link, then on the next screen click on the Get Started Now button and complete the following steps. Choose Custom Deployment and click the Skip Walkthrough button, then create a new application; the following are the steps to create an application.

– Application Name: the display name for the application you want to deploy.
– Deployment Group Name: something similar to environments, like LIVE, STAGING, and QA.
– Add Instances: you can choose Amazon EC2 instances by name, group name, and so on. If you are using the autoscaling feature, you can add that Auto Scaling group too.
– Deployment Config: a way to specify how we want to deploy the application, whether one server at a time, half of the servers at a time, or all at once.
– Service Role: choose the IAM role that has access to the S3 bucket that we will use to hold code revisions.
– Hit the Create Application button.

OK, we just created a Code Deploy application. Let's hold it here and move to our Node.js app to get it ready for deployment.

Code Revision

OK, you have written your app and you are ready to deploy it. The most important thing your app needs is appspec.yml. This file will be used by the Code Deploy agent to perform various steps during the deployment life cycle. In simple words, the deployment process includes the following steps:

Stop the previous application if already deployed; if it is the first time, this step will not exist.
Update the latest code, such as copying files to the application directory.
Install new packages or run DB migrations.
Start the application.
Check that the application is working.
Roll back if something went wrong.

All of the above steps seem easy, but they are time consuming and painful to perform each time. Let's see how we can perform these steps easily with AWS Code Deploy. Let's say we have the following appspec.yml file in our code, and we also have a bin folder in the app that contains executable sh scripts to perform certain things that I will explain next. First of all, take an example of appspec.yml:

version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/my-app
permissions:
  - object: /
    pattern: "**"
    owner: ec2-user
    group: ec2-user
hooks:
  ApplicationStop:
    - location: bin/app-stop
      timeout: 10
      runas: ec2-user
  AfterInstall:
    - location: bin/install-pkgs
      timeout: 1200
      runas: ec2-user
  ApplicationStart:
    - location: bin/app-start
      timeout: 60
      runas: ec2-user
  ValidateService:
    - location: bin/app-validate
      timeout: 10
      runas: ec2-user

The files section is a way to tell Code Deploy which files to copy and where to place them:

files:
  - source: /
    destination: /home/ec2-user/my-app

We can specify the permissions to be set on the source files on copy:

permissions:
  - object: /
    pattern: "**"
    owner: ec2-user
    group: ec2-user

Hooks are executed in order during the Code Deploy life cycle. We have ApplicationStop, DownloadBundle, BeforeInstall, Install, AfterInstall, ApplicationStart, and ValidateService hooks, which all have the same syntax:

hooks:
  deployment-lifecycle-event-name:
    - location: script-location
      timeout: timeout-in-seconds
      runas: user-name

location is the relative path from the code root to the script file that you want to execute. timeout is the maximum time a script can run. runas is the OS user to run the script as; sometimes you may want to run a script with different user privileges.
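The article does not show the contents of the bin/ scripts referenced above, so the following is purely an illustrative sketch for a Node.js app and is not taken from the original post; the server.js entry point, the log file, and the PID-file approach are assumptions you would adapt to your own project. A start script might look like this:

#!/bin/bash
# bin/app-start - start the Node.js app from the destination configured in appspec.yml
cd /home/ec2-user/my-app
nohup node server.js > app.log 2>&1 &
echo $! > app.pid

A matching bin/app-stop could then read:

#!/bin/bash
# bin/app-stop - stop the previously started app, if it is running
if [ -f /home/ec2-user/my-app/app.pid ]; then
  kill "$(cat /home/ec2-user/my-app/app.pid)" || true
  rm -f /home/ec2-user/my-app/app.pid
fi

The main requirement Code Deploy places on these scripts is that they exist at the locations named in appspec.yml, are executable, and exit with a zero status on success; a non-zero exit code fails the corresponding lifecycle event.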
Now let's bundle the app: exclude the unwanted files, such as the node_modules folder, and zip it up. I use the AWS CLI to deploy my code revisions; you can install awscli from PyPI (the Python Package Index):

sudo pip install awscli

I am using an awscli profile that has access to the S3 code revision bucket in my account. Here is a code sample that can help:

aws --profile simsaw-baas deploy push --no-ignore-hidden-files --application-name MY_CODE_DEPLOY_APP_NAME --s3-location s3://MY_CODE_REVISONS/MY_CODE_DEPLOY_APP_NAME/APP_BUILD --source MY_APP_BUILD.zip

Now the code revision is published to S3, and the same revision is registered with the Code Deploy application named MY_CODE_DEPLOY_APP_NAME (this will be the name of the application you created earlier in the second step). Now go back to the AWS console, Code Deploy.

Deploy Code Revision

Select your Code Deploy application from the application list shown on the Code Deploy dashboard. It will take you to the next window, where you can see the published revision(s); expand the revision and click on Deploy This Revision. You will be redirected to a new window with options such as application and deployment group. Choose them carefully and hit deploy. Wait for the magic to happen. Code Deploy also has an option to deploy your app from GitHub. The process for it is almost the same, except that you do not need to push code revisions to S3.

About the author

Ankit Patial has a Masters in Computer Applications, and nine years of experience with custom APIs, web and desktop applications using .NET technologies, ROR and NodeJs.
As a CTO with SimSaw Inc and Pink Hand Technologies, his job is to learn and help his team to implement the best practices of using Cloud Computing and JavaScript technologies.


Learning with Minecraft Mods

Aaron Mills
12 Jun 2015
6 min read
Minecraft has shaped a generation of gamers. It's popular with all ages, but elementary-age kids live and breathe it. Inevitably, someone starts to wonder whether this is a good thing, whether that be parents, teachers, or the media. But something that is often overlooked is the influence that Minecraft Mods can have on that equation. Minecraft Mods come in many different flavors and varieties. The Minecraft modding community is perhaps the largest such community to exist, and Minecraft lends itself well to presenting real-world problems as part of the game world. Due to the sheer number of mods, you can find many examples that incorporate some form of beneficial learning with real-world applications. For example, there are mods that incorporate aspects of engineering, systems design, genetics, logic puzzles, computer programming, and more. Vanilla Minecraft aside, let us take a look at the kinds of challenges and tasks that foster learning in various Minecraft Mods. Many mods include some kind of interactive machine system. Whether it be pipes or wires, they all have specific rules for construction and present the player with the challenge of combining a bunch of simple pieces together to create something more. Usually the end result is a factory for manufacturing more items and blocks to build yet more machinery, or a power plant for powering all that machinery. Construction typically requires logical problem solving and spatial comprehension. Running a pipe from one end of your factory to the other can be just as complex a piece of spaghetti as in a real factory. There are many mods that focus on these challenges, including Buildcraft, EnderIO, IndustrialCraft2, PneumaticCraft, and more. These mods are also generally capable of interacting with each other seamlessly for even more creative solutions for your factory floor. But factories aren't the only logic problems that mods present. There are also many logic puzzles built into mods. My own mod, Railcraft, has a fully functional train routing and signaling system. It's strongly based on real-life examples of railroads and provides many of the same solutions you'll find real railway engineers using to solve the challenges of a railway. Problems that a budding railway engineer faces include scheduling, routing, best usage of track, avoiding collisions using signal logic, and more. But there are many other mods out there with similar types of puzzles. Some item management and piping systems are very logic driven. Logistics Pipes and Applied Energistics 2 are a couple of such mods that take things just a step beyond normal pipe mods, in terms of both the amount of logical thinking required and the system's overall capabilities. Both mods allow you to intelligently manage supply and demand of items across an entire base using logic modules that you install in the machines and pipes. This is all well and good of course, but there are some mods that take this even further. When it comes to logic, some mods allow you to actually write computer code. ComputerCraft and OpenComputers are two mods that allow you to use Lua to control in-game displays, robots, and more. There are even add-ons to these mods that allow you to control a Railcraft railway network from an in-game computer screen. Robot programming is generally very similar to the old "move the turtle around the screen" introductory programming lessons; ComputerCraft even calls its robots Turtles. You can instruct them to move and interact with the world, creating complex structures or just mining out an entire area for ore. The more complex the task, the more complex the code required.
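As a flavour of what that looks like, here is a tiny illustrative ComputerCraft-style turtle program, written for this overview rather than taken from any particular mod tutorial; it digs a short three-block tunnel straight ahead:

-- dig a three-block tunnel in front of the turtle
for i = 1, 3 do
  turtle.dig()      -- break the block directly ahead
  turtle.forward()  -- move into the space that was just cleared
end

Even a toy program like this exercises the same thinking as the classic turtle-graphics lessons: sequencing, loops, and reasoning about where the robot is in the world.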
However, while mechanical and logic based problems are great, they are far from all that Minecraft Mods have to offer. Another area that has received a lot of attention from mods is Genetics. The first major mod to pioneer Genetics was IndustrialCraft2 with its Crop Breeding mechanics. As an added bonus, IC2 also provides an interesting power system. However, when most people think of Genetics in Mods, the first mod they think of is the Forestry Mod by SirSengir. In Forestry, you can breed Bees, Trees, and Butterflies. But there are other mods, too, such as Mariculture, which allows you to breed Fish. Genetics systems generally provide a wide range of traits and abilities that can be bred into the organisms: for example, increasing the yields of crops or improving the speed at which your bees work, and even breeding in more interesting traits, such as bees that heal you when you get close to the hive. The systems are generally fairly representative of Mendelian Inheritance: each individual has two sets of genes, one coming from each parent. There are dominant and recessive genes, and the two sets combined give your individual its specific traits. Punnett Squares are encouraged, just like those taught in school. Speaking of school, no discussion of learning and Minecraft would be complete without at least mentioning MinecraftEdu. TeacherGaming, the company behind MinecraftEdu, partnered with Mojang to provide a version of Minecraft specifically tailored for school programs. Of note is the fact that MinecraftEdu ships ready for use with mods and even recommends some of the mods mentioned in this post, including a special version of ComputerCraft created just for the MinecraftEdu project. Real schools use this stuff in real classrooms, teaching kids lessons using Minecraft and Minecraft Mods. So yes, there are many things that can be learned by playing Minecraft, especially if you play with the right mods. So should we be worried about the current generational obsession with Minecraft? Probably not. There are much less edifying things these kids could be playing. So next time your kid starts lecturing you about Mendelian Inheritance, remember it's quite possible he learned about it while playing Minecraft.

About the Author

Aaron Mills was born in 1983 and lives in the Pacific Northwest, which is a land rich in lore, trees, and rain. He has a Bachelor's Degree in Computer Science and studied at Washington State University Vancouver. He is best known for his work on the Minecraft Mod, Railcraft, but has also contributed significantly to the Minecraft Mods of Forestry and Buildcraft, as well as making some contributions to the Minecraft Forge project.