Patterns for Data Processing

Packt
29 Apr 2015
12 min read
In this article by Marcus Young, the author of the book Implementing Cloud Design Patterns for AWS, we will cover the following patterns:

- Queuing chain pattern
- Job observer pattern

Queuing chain pattern

In the queuing chain pattern, we will use a publish-subscribe (pub-sub) model with an instance that generates work asynchronously for another server to pick up and process. This is described in the following diagram.

The diagram describes the scenario we will solve: computing Fibonacci numbers asynchronously. We will spin up a Creator server that generates random integers and publishes them into an SQS queue, myinstance-tosolve. We will then spin up a second instance that continuously attempts to grab a message from the myinstance-tosolve queue, computes the Fibonacci sequence up to the number contained in the message body, and stores the result as a new message in the myinstance-solved queue. Information on the Fibonacci algorithm can be found at http://en.wikipedia.org/wiki/Fibonacci_number.

This scenario is very basic, as it is the core of the microservices architectural model. In this scenario, we could add as many worker servers as we see fit with no change to the infrastructure, which is the real power of the microservices model.

The first thing we will do is create a new SQS queue. From the SQS console, select Create New Queue. From the Create New Queue dialog, enter myinstance-tosolve into the Queue Name text box and select Create Queue. This will create the queue and bring you back to the main SQS console, where you can view the queues created. Repeat this process, entering myinstance-solved for the second queue name. When complete, the SQS console should list both queues.

In the following code snippets, you will need the URLs for the queues. You can retrieve them from the SQS console by selecting the appropriate queue, which will bring up an information box. The queue URL is listed as URL in the following screenshot.

Next, we will launch a creator instance, which will create random integers and write them into the myinstance-tosolve queue via the URL noted previously. From the EC2 console, spin up an instance as per your environment from the AWS Linux AMI. Once it is ready, SSH into it (note that acctarn, mykey, and mysecret need to be replaced with your actual credentials):

    [ec2-user@ip-10-203-10-170 ~]$ [[ -d ~/.aws ]] && rm -rf ~/.aws/config || mkdir ~/.aws
    [ec2-user@ip-10-203-10-170 ~]$ echo $'[default]\naws_access_key_id=mykey\naws_secret_access_key=mysecret\nregion=us-east-1' > .aws/config
    [ec2-user@ip-10-203-10-170 ~]$ for i in {1..100}; do
    value=$(shuf -i 1-50 -n 1)
    aws sqs send-message --queue-url https://queue.amazonaws.com/acctarn/myinstance-tosolve --message-body ${value} >/dev/null 2>&1
    done

Once the snippet completes, we should have 100 messages in the myinstance-tosolve queue, ready to be retrieved.
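If you prefer to stay in Python rather than shell, the same creator loop can be written against the AWS SDK. The following boto3 sketch is mine rather than the book's; the queue URL is a placeholder for the one shown in your SQS console, and it assumes credentials are already configured on the machine.

    import random
    import boto3  # AWS SDK for Python; install with `pip install boto3`

    # Placeholder URL -- substitute the myinstance-tosolve URL from your SQS console.
    QUEUE_URL = "https://queue.amazonaws.com/123456789012/myinstance-tosolve"

    sqs = boto3.client("sqs", region_name="us-east-1")

    # Publish 100 random integers, mirroring the shell loop above.
    for _ in range(100):
        value = random.randint(1, 50)
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=str(value))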
Now that those messages are ready to be picked up and solved, we will spin up a new EC2 instance, again as per your environment from the AWS Linux AMI. Once it is ready, SSH into it (note that acctarn, mykey, and mysecret need to be valid and set to your credentials):

    [ec2-user@ip-10-203-10-169 ~]$ [[ -d ~/.aws ]] && rm -rf ~/.aws/config || mkdir ~/.aws
    [ec2-user@ip-10-203-10-169 ~]$ echo $'[default]\naws_access_key_id=mykey\naws_secret_access_key=mysecret\nregion=us-east-1' > .aws/config
    [ec2-user@ip-10-203-10-169 ~]$ sudo yum install -y ruby-devel gcc >/dev/null 2>&1
    [ec2-user@ip-10-203-10-169 ~]$ sudo gem install json >/dev/null 2>&1
    [ec2-user@ip-10-203-10-169 ~]$ cat <<'EOF' | sudo tee -a /usr/local/bin/fibsqs >/dev/null 2>&1
    #!/bin/sh
    while [ true ]; do

    function fibonacci {
    a=1
    b=1
    i=0
    while [ $i -lt $1 ]
    do
    printf "%d\n" $a
    let sum=$a+$b
    let a=$b
    let b=$sum
    let i=$i+1
    done
    }

    message=$(aws sqs receive-message --queue-url https://queue.amazonaws.com/acctarn/myinstance-tosolve)
    if [[ -n $message ]]; then
    body=$(echo $message | ruby -e "require 'json'; p JSON.parse(gets)['Messages'][0]['Body']" | sed 's/"//g')
    handle=$(echo $message | ruby -e "require 'json'; p JSON.parse(gets)['Messages'][0]['ReceiptHandle']" | sed 's/"//g')
    aws sqs delete-message --queue-url https://queue.amazonaws.com/acctarn/myinstance-tosolve --receipt-handle $handle
    echo "Solving '${body}'."
    solved=$(fibonacci $body)
    parsed_solve=$(echo $solved | sed 's/\n/ /g')
    echo "'${body}' solved."
    aws sqs send-message --queue-url https://queue.amazonaws.com/acctarn/myinstance-solved --message-body "${parsed_solve}"
    fi

    sleep 1
    done
    EOF
    [ec2-user@ip-10-203-10-169 ~]$ sudo chown ec2-user:ec2-user /usr/local/bin/fibsqs && chmod +x /usr/local/bin/fibsqs

There will be no output from this code snippet yet, so now let's run the fibsqs command we created. This will continuously poll the myinstance-tosolve queue, solve the Fibonacci sequence for each integer, and store the result in the myinstance-solved queue:

    [ec2-user@ip-10-203-10-169 ~]$ fibsqs
    Solving '48'.
    '48' solved.
    {
        "MD5OfMessageBody": "73237e3b7f7f3491de08c69f717f59e6",
        "MessageId": "a249b392-0477-4afa-b28c-910233e7090f"
    }
    Solving '6'.
    '6' solved.
    {
        "MD5OfMessageBody": "620b0dd23c3dddbac7cce1a0d1c8165b",
        "MessageId": "9e29f847-d087-42a4-8985-690c420ce998"
    }

While this is running, we can verify the movement of messages from the tosolve queue into the solved queue by viewing the Messages Available column in the SQS console. This tells us that the worker virtual machine is in fact doing work, but we can prove that it is working correctly by viewing the messages in the myinstance-solved queue. To view messages, right-click on the myinstance-solved queue and select View/Delete Messages. If this is your first time viewing messages in SQS, you will receive a warning box that explains the impact of viewing messages in a queue. From the View/Delete Messages in myinstance-solved dialog, select Start Polling for Messages. We can now see that we are in fact working from a queue.
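The same worker can also be expressed with the AWS SDK for Python rather than the shell-plus-Ruby script above. The following boto3 sketch is mine rather than the book's; the queue URLs are placeholders, and long polling, error handling, and visibility-timeout tuning are left out for brevity.

    import time
    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    TOSOLVE = "https://queue.amazonaws.com/123456789012/myinstance-tosolve"   # placeholder URL
    SOLVED = "https://queue.amazonaws.com/123456789012/myinstance-solved"     # placeholder URL

    def fibonacci(n):
        # Return the first n Fibonacci numbers, matching the shell function above.
        a, b, out = 1, 1, []
        for _ in range(n):
            out.append(a)
            a, b = b, a + b
        return out

    while True:
        resp = sqs.receive_message(QueueUrl=TOSOLVE, MaxNumberOfMessages=1)
        for msg in resp.get("Messages", []):
            n = int(msg["Body"])
            sqs.delete_message(QueueUrl=TOSOLVE, ReceiptHandle=msg["ReceiptHandle"])
            print("Solving '%d'." % n)
            solved = " ".join(str(x) for x in fibonacci(n))
            sqs.send_message(QueueUrl=SOLVED, MessageBody=solved)
        time.sleep(1)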
Job observer pattern

The previous two patterns show a very basic understanding of passing messages around a complex system so that components (machines) can work independently of each other. While they are a good starting place, the system as a whole could improve if it were more autonomous. Given the previous example, we could very easily duplicate the worker instance if either one of the SQS queues grew large, but using the Amazon-provided CloudWatch service, we can automate this process. Using CloudWatch, we might end up with a system that resembles the following diagram. For this pattern, we will not start from scratch but directly from the previous priority queuing pattern.

The major difference between the previous diagram and the diagram displayed in the priority queuing pattern is the addition of a CloudWatch alarm on the myinstance-tosolve-priority queue, and the addition of an auto scaling group for the worker instances. The behavior of this pattern is that we will define a depth for our priority queue that we deem too high and create an alarm for that threshold. If the number of messages in that queue goes beyond that point, the alarm will notify the auto scaling group to spin up an instance. When the alarm goes back to OK, meaning that the number of messages is below the threshold, the group will scale down as much as our auto scaling policy allows.

Before we start, make sure any worker instances are terminated.

The first thing we should do is create an alarm. From the CloudWatch console in AWS, click Alarms on the sidebar and select Create Alarm. From the new Create Alarm dialog, select Queue Metrics under SQS Metrics. This will bring us to a Select Metric section. Type myinstance-tosolve-priority ApproximateNumberOfMessagesVisible into the search box and hit Enter. Select the checkbox for the only row and select Next. From the Define Alarm screen, make the following changes and then select Create Alarm:

- In the Name textbox, give the alarm a unique name.
- In the Description textbox, give the alarm a general description.
- In the Whenever section, set 0 to 1.
- In the Actions section, click Delete for the only Notification.
- In the Period drop-down, select 1 Minute.
- In the Statistic drop-down, select Sum.

Now that we have our alarm in place, we need to create a launch configuration and an auto scaling group that refers to this alarm. Create a new launch configuration from the AWS Linux AMI with details as per your environment. However, set the user data to the following (note that acctarn, mykey, and mysecret need to be valid):

    #!/bin/bash
    [[ -d /home/ec2-user/.aws ]] && rm -rf /home/ec2-user/.aws/config || mkdir /home/ec2-user/.aws
    echo $'[default]\naws_access_key_id=mykey\naws_secret_access_key=mysecret\nregion=us-east-1' > /home/ec2-user/.aws/config
    chown ec2-user:ec2-user /home/ec2-user/.aws -R
    cat <<'EOF' >/usr/local/bin/fibsqs
    #!/bin/sh
    function fibonacci {
    a=1
    b=1
    i=0
    while [ $i -lt $1 ]
    do
    printf "%d\n" $a
    let sum=$a+$b
    let a=$b
    let b=$sum
    let i=$i+1
    done
    }
    number="$1"
    solved=$(fibonacci $number)
    parsed_solve=$(echo $solved | sed 's/\n/ /g')
    aws sqs send-message --queue-url https://queue.amazonaws.com/acctarn/myinstance-solved --message-body "${parsed_solve}"
    exit 0
    EOF
    chown ec2-user:ec2-user /usr/local/bin/fibsqs
    chmod +x /usr/local/bin/fibsqs
    yum install -y libxml2 libxml2-devel libxslt libxslt-devel gcc ruby-devel >/dev/null 2>&1
    gem install nokogiri -- --use-system-libraries >/dev/null 2>&1
    gem install shoryuken >/dev/null 2>&1
    cat <<EOF >/home/ec2-user/config.yml
    aws:
      access_key_id: mykey
      secret_access_key: mysecret
      region: us-east-1 # or <%= ENV['AWS_REGION'] %>
      receive_message:
        attributes:
          - receive_count
          - sent_at
    concurrency: 25 # The number of allocated threads to process messages. Default 25
    delay: 25 # The delay in seconds to pause a queue when it's empty. Default 0
    queues:
      - [myinstance-tosolve-priority, 2]
      - [myinstance-tosolve, 1]
    EOF
    cat <<EOF >/home/ec2-user/worker.rb
    class MyWorker
      include Shoryuken::Worker
      shoryuken_options queue: 'myinstance-tosolve', auto_delete: true

      def perform(sqs_msg, body)
        puts "normal: #{body}"
        %x[/usr/local/bin/fibsqs #{body}]
      end
    end

    class MyFastWorker
      include Shoryuken::Worker
      shoryuken_options queue: 'myinstance-tosolve-priority', auto_delete: true

      def perform(sqs_msg, body)
        puts "priority: #{body}"
        %x[/usr/local/bin/fibsqs #{body}]
      end
    end
    EOF
    chown ec2-user:ec2-user /home/ec2-user/worker.rb /home/ec2-user/config.yml
    screen -dm su - ec2-user -c 'shoryuken -r /home/ec2-user/worker.rb -C /home/ec2-user/config.yml'

Next, create an auto scaling group that uses the launch configuration we just created. The rest of the details for the auto scaling group are as per your environment. However, set it to start with 0 instances and do not set it to receive traffic from a load balancer. Once the auto scaling group has been created, select it from the EC2 console and select Scaling Policies. From here, click Add Policy to create a policy similar to the one shown in the following screenshot and click Create.

Next, we get to trigger the alarm. To do this, we will again submit random numbers into both the myinstance-tosolve and myinstance-tosolve-priority queues:

    [ec2-user@ip-10-203-10-79 ~]$ [[ -d ~/.aws ]] && rm -rf ~/.aws/config || mkdir ~/.aws
    [ec2-user@ip-10-203-10-79 ~]$ echo $'[default]\naws_access_key_id=mykey\naws_secret_access_key=mysecret\nregion=us-east-1' > .aws/config
    [ec2-user@ip-10-203-10-79 ~]$ for i in {1..100}; do
    value=$(shuf -i 1-50 -n 1)
    aws sqs send-message --queue-url https://queue.amazonaws.com/acctarn/myinstance-tosolve --message-body ${value} >/dev/null 2>&1
    done
    [ec2-user@ip-10-203-10-79 ~]$ for i in {1..100}; do
    value=$(shuf -i 1-50 -n 1)
    aws sqs send-message --queue-url https://queue.amazonaws.com/acctarn/myinstance-tosolve-priority --message-body ${value} >/dev/null 2>&1
    done

After five minutes, the alarm will go into effect and our auto scaling group will launch an instance to respond to it. This can be viewed from the Scaling History tab for the auto scaling group in the EC2 console. Even though our alarm is set to trigger after one minute, CloudWatch only updates in intervals of five minutes. This is why our wait time was not as short as our alarm.

Our auto scaling group has now responded to the alarm by launching an instance. Launching an instance by itself will not resolve anything, but using the user data from the launch configuration, it should configure itself to clear out the queue, solve the Fibonacci sequence for each message, and finally submit the result to the myinstance-solved queue. If this is successful, our myinstance-tosolve-priority queue should get emptied out. We can verify this from the SQS console as before, and finally, our alarm in CloudWatch goes back to an OK status.

We are now stuck with the instance because we have not set any decrease policy. I won't cover this in detail, but to set it, we would create a new alarm that triggers when the message count is a lower number, such as 0, and set the auto scaling group to decrease the instance count when that alarm is triggered. This would allow us to scale out when we are over the threshold, and scale in when we are under the threshold.
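For readers who want to go one step further, the decrease alarm just described can also be scripted instead of clicked through. The following boto3 sketch is not from the book; the scale-in policy ARN is a placeholder you would take from your own auto scaling group, and the thresholds are only illustrative.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Placeholder ARN -- use the ARN of the scale-in policy attached to your auto scaling group.
    SCALE_IN_POLICY_ARN = "arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:example"

    # Alarm when the priority queue has been empty for five consecutive minutes,
    # so the group can remove the extra worker instance.
    cloudwatch.put_metric_alarm(
        AlarmName="myinstance-tosolve-priority-empty",
        Namespace="AWS/SQS",
        MetricName="ApproximateNumberOfMessagesVisible",
        Dimensions=[{"Name": "QueueName", "Value": "myinstance-tosolve-priority"}],
        Statistic="Sum",
        Period=60,
        EvaluationPeriods=5,
        Threshold=0,
        ComparisonOperator="LessThanOrEqualToThreshold",
        AlarmActions=[SCALE_IN_POLICY_ARN],
    )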
This completes the final pattern for data processing.

Summary

In this article, in the queuing chain pattern, we walked through creating independent systems that use the Amazon-provided SQS service to solve Fibonacci numbers without interacting with each other directly. Then we took the topic deeper in the job observer pattern and covered how to tie in auto scaling policies and alarms from the CloudWatch service to scale out when the priority queue gets too deep.

Algorithmic Trading

Packt
29 Apr 2015
16 min read
In this article by James Ma Weiming, author of the book Mastering Python for Finance, we will see how algorithmic trading automates the systematic trading process, where orders are executed at the best price possible based on a variety of factors, such as pricing, timing, and volume. Some brokerage firms may offer an application programming interface (API) as part of their service offering to customers who wish to deploy their own trading algorithms. An algorithmic trading system must be highly robust and handle any point of failure during order execution. Network configuration, hardware, memory management and speed, and user experience are some of the factors to be considered when designing a system for executing orders. Designing larger systems inevitably adds complexity to the framework.

As soon as a position in a market is opened, it is subject to various types of risk, such as market risk. To preserve the trading capital as much as possible, it is important to incorporate risk management measures into the trading system. Perhaps the most common risk measure used in the financial industry is the value-at-risk (VaR) technique. We will discuss the beauty and flaws of VaR, and how it can be incorporated into the trading system that we will develop in this article.

In this article, we will cover the following topics:

- An overview of algorithmic trading
- A list of brokers and system vendors with public APIs
- Choosing a programming language for a trading system
- Setting up API access on the Interactive Brokers (IB) trading platform
- Using the IbPy module to interact with IB Trader WorkStation (TWS)

Introduction to algorithmic trading

In the 1990s, exchanges had already begun to use electronic trading systems. By 1997, 44 exchanges worldwide used automated systems for trading futures and options, with more exchanges in the process of developing automated technology. Exchanges such as the Chicago Board of Trade (CBOT) and the London International Financial Futures and Options Exchange (LIFFE) used their electronic trading systems as an after-hours complement to traditional open outcry trading in pits, giving traders 24-hour access to the exchange's risk management tools. With improvements in technology, technology-based trading became less expensive, fueling the growth of trading platforms that are faster and more powerful. Higher reliability of order execution and lower rates of message transmission error deepened financial institutions' reliance on technology.

The majority of asset managers, proprietary traders, and market makers have since moved from the trading pits to electronic trading floors. As systematic or computerized trading became more commonplace, speed became the most important factor in determining the outcome of a trade. Quants utilizing sophisticated fundamental models are able to recompute fair values of trading products on the fly and execute trading decisions, enabling them to reap profits at the expense of fundamental traders using traditional tools. This gave way to the term high-frequency trading (HFT), which relies on fast computers to execute trading decisions before anyone else can. HFT has evolved into a billion-dollar industry.

Algorithmic trading refers to the automation of the systematic trading process, where the order execution is heavily optimized to give the best price possible. It is not part of the portfolio allocation process.
Banks, hedge funds, brokerage firms, clearing firms, and trading firms typically have their servers placed right next to the electronic exchange to receive the latest market prices and to perform the fastest order execution possible. They bring enormous trading volumes to the exchange. Anyone who wishes to participate in low-latency, high-volume trading activities, such as complex event processing or capturing fleeting price discrepancies, by acquiring exchange connectivity may do so in the form of co-location, where his or her server hardware can be placed on a rack right next to the exchange for a fee.

The Financial Information Exchange (FIX) protocol is the industry standard for electronic communications with the exchange from the private server, for direct market access (DMA) to real-time information. C++ is the common choice of programming language for trading over the FIX protocol, though other languages, such as those running on the .NET Framework common language runtime and Java, can be used. Before creating an algorithmic trading platform, you would need to assess various factors, such as speed and ease of learning, before deciding on a specific language for the purpose.

Brokerage firms provide a trading platform of some sort to their customers so they can execute orders on selected exchanges, in return for commission fees. Some brokerage firms may offer an API as part of their service offering to technically inclined customers who wish to run their own trading algorithms. In most circumstances, customers may also choose from a number of commercial trading platforms offered by third-party vendors. Some of these trading platforms may also offer API access to route orders electronically to the exchange. It is important to read the API documentation beforehand to understand the technical capabilities offered by your broker and to formulate an approach for developing an algorithmic trading system.

List of trading platforms with public API

The following list shows some brokers and trading platform vendors who have their API documentation publicly available, together with the programming languages their APIs support:

- Interactive Brokers (https://www.interactivebrokers.com/en/index.php?f=1325): C++, POSIX C++, Java, and Visual Basic for ActiveX
- E*Trade (https://developer.etrade.com): Java, PHP, and C++
- IG (http://labs.ig.com/): REST, Java, FIX, and Microsoft .NET Framework 4.0
- Tradier (https://developer.tradier.com): Java, Perl, Python, and Ruby
- TradeKing (https://developers.tradeking.com): Java, Node.js, PHP, R, and Ruby
- Cunningham Trading Systems (http://www.ctsfutures.com/wiki/T4%20API%2040.MainPage.ashx): Microsoft .NET Framework 4.0
- CQG (http://cqg.com/Products/CQG-API.aspx): C#, C++, Excel, MATLAB, and VB.NET
- Trading Technologies (https://developer.tradingtechnologies.com): Microsoft .NET Framework 4.0
- OANDA (http://developer.oanda.com): REST, Java, FIX, and MT4

Which is the best programming language to use?

With many choices of programming languages available to interface with brokers or vendors, the question that comes naturally to anyone starting out in algorithmic trading platform development is: which language should I use? The short answer is that there is really no best programming language. How your product will be developed, the performance metrics to follow, the costs involved, the latency threshold, the risk measures, and the expected user interface are all pieces of the puzzle to be taken into consideration.
The risk manager, execution engine, and portfolio optimizer are some of the major components that will affect the design of your system. Your existing trading infrastructure, choice of operating system, programming language compiler capability, and available software tools pose further constraints on the system design, development, and deployment.

System functionalities

It is important to define the outcomes of your trading system. One outcome could be a research-based system that is more concerned with obtaining high-quality data from data vendors, performing computations or running models, and evaluating a strategy through signal generation. Part of the research component might include a data-cleaning module or a backtesting interface to run a strategy with theoretical parameters over historical data. CPU speed, memory size, and bandwidth are factors to be considered while designing such a system.

Another outcome could be an execution-based system that is more concerned with risk management and order handling features to ensure timely execution of multiple orders. The system must be highly robust and handle any point of failure during order execution. As such, network configuration, hardware, memory management and speed, and user experience are some factors to be considered when designing a system for executing orders. A system may contain one or more of these functionalities, and designing larger systems inevitably adds complexity to the framework. It is recommended that you choose one or more programming languages that can address and balance the development speed, ease of development, scalability, and reliability of your trading system.

Algorithmic trading with Interactive Brokers and IbPy

In this section, we will build a working algorithmic trading platform that will authenticate with Interactive Brokers (IB) and log in, retrieve market data, and send orders. IB is one of the most popular brokers in the trading community and has a long history of API development; there are plenty of articles on the use of the API available on the Web. IB serves clients ranging from hedge funds to retail traders. Although the API does not support Python directly, Python wrappers such as IbPy are available to make the API calls to the IB interface. The IB API is unique to its own implementation, and every broker has its own API handling methods. Nevertheless, the documents and sample applications provided by your broker will demonstrate the core functionality of every API interface, which can be easily integrated into an algorithmic trading system if designed properly.

Getting Interactive Brokers' Trader WorkStation

The official page for IB is https://www.interactivebrokers.com. Here, you can find a wealth of information regarding trading and investing for retail and institutional traders. In this section, we will take a look at how to get the Trader WorkStation X (TWS) installed and running on your local workstation before setting up an algorithmic trading system using Python. Note that we will perform simulated trading on a demonstration account. If your trading strategy turns out to be profitable, head to the OPEN AN ACCOUNT section of the IB website to open a live trading account. Rules, regulations, market data fees, exchange fees, commissions, and other conditions are subject to the broker of your choice. In addition, market conditions are vastly different from the simulated environment.
You are encouraged to perform extensive testing on your algorithmic trading system before running it on live markets. The following key steps describe how to install TWS on your local workstation, log in to the demonstration account, and set it up for API use:

1. From IB's official website, navigate to TRADING, and then select Standalone TWS. Choose the installation executable that is suitable for your local workstation. TWS runs on Java; therefore, ensure that the Java runtime plugin is already installed on your local workstation. Refer to the following screenshot:
2. When prompted during the installation process, choose the Trader_WorkStation_X and IB Gateway options. Trader WorkStation X (TWS) is the trading platform with full order management functionality. The IB Gateway program accepts and processes API connections without any of the order management features of TWS. We will not cover the use of the IB Gateway, but you may find it useful later.
3. Select the destination directory on your local workstation where TWS will place all the required files, as shown in the following screenshot:
4. When the installation is completed, a TWS shortcut icon will appear together with your list of installed applications. Double-click on the icon to start the TWS program.
5. When TWS starts, you will be prompted to enter your login credentials. To log in to the demonstration account, type edemo in the username field and demouser in the password field, as shown in the following screenshot:
6. Once we have managed to load our demo account on TWS, we can now set up its API functionality. On the toolbar, click on Configure.
7. Under the Configuration tree, open the API node to reveal further options and select Settings. Note that Socket port is 7496, and we added the IP address of the workstation housing our algorithmic trading system to the list of trusted IP addresses, which in this case is 127.0.0.1. Ensure that the Enable ActiveX and Socket Clients option is selected to allow socket connections to TWS.
8. Click on OK to save all the changes. TWS is now ready to accept orders and market data requests from our algorithmic trading system.

Getting IbPy – the IB API wrapper

IbPy is an add-on module for Python that wraps the IB API. It is open source and can be found at https://github.com/blampe/IbPy. Head to this URL and download the source files. Unzip the source folder, and use Terminal to navigate to this directory. Type python setup.py install to install IbPy as part of the Python runtime environment. The use of IbPy is similar to the API calls documented on the IB website. The documentation for IbPy is at https://code.google.com/p/ibpy/w/list.

A simple order routing mechanism

In this section, we will start interacting with TWS using Python by establishing a connection and sending out a market order to the exchange. Once IbPy is installed, import the following necessary modules into our Python script:

    from ib.ext.Contract import Contract
    from ib.ext.Order import Order
    from ib.opt import Connection

Next, implement the logging functions to handle calls from the server. The error_handler method is invoked whenever the API encounters an error, which is accompanied by a message. The server_handler method is dedicated to handling all the other forms of returned API messages. The msg variable is an ib.opt.message object and references the method calls defined by the IB API EWrapper methods. The API documentation can be accessed at https://www.interactivebrokers.com/en/software/api/api.htm.
The following is the Python code for the error_handler and server_handler methods:

    def error_handler(msg):
        print "Server Error:", msg

    def server_handler(msg):
        print "Server Msg:", msg.typeName, "-", msg

We will place a sample order for the stock AAPL. The contract specifications of the order are defined by the Contract class object found in the ib.ext.Contract module. We will create a method called create_contract that returns a new instance of this object:

    def create_contract(symbol, sec_type, exch, prim_exch, curr):
        contract = Contract()
        contract.m_symbol = symbol
        contract.m_secType = sec_type
        contract.m_exchange = exch
        contract.m_primaryExch = prim_exch
        contract.m_currency = curr
        return contract

The Order class object is used to place an order with TWS. Let's define a method called create_order that will return a new instance of the object:

    def create_order(order_type, quantity, action):
        order = Order()
        order.m_orderType = order_type
        order.m_totalQuantity = quantity
        order.m_action = action
        return order

After the required methods are created, we can then begin to script the main functionality. Let's initialize the required variables:

    if __name__ == "__main__":
        client_id = 100
        order_id = 1
        port = 7496
        tws_conn = None

Note that the client_id variable is our assigned integer that identifies the instance of the client communicating with TWS. The order_id variable is our assigned integer that identifies the order queue number sent to TWS; each new order requires this value to be incremented sequentially. The port number has the same value as defined in our API settings of TWS earlier. The tws_conn variable holds the connection to TWS; let's initialize it with an empty value for now.

Let's use a try block that encapsulates the Connection.create method to handle the socket connections to TWS in a graceful manner:

    try:
        # Establish connection to TWS.
        tws_conn = Connection.create(port=port, clientId=client_id)
        tws_conn.connect()

        # Assign error handling function.
        tws_conn.register(error_handler, 'Error')

        # Assign server messages handling function.
        tws_conn.registerAll(server_handler)
    finally:
        # Disconnect from TWS
        if tws_conn is not None:
            tws_conn.disconnect()

The port and clientId parameter fields define this connection. After the connection instance is created, the connect method will try to connect to TWS. When the connection to TWS has successfully opened, it is time to register listeners to receive notifications from the server. The register method associates a function handler with a particular event, while the registerAll method associates a handler with all the messages generated. This is where the error_handler and server_handler methods declared earlier will be used.

Before sending our very first order of 100 shares of AAPL to the exchange, we will call the create_contract method to create a new contract object for AAPL. Then, we will call the create_order method to create a new Order object to go long 100 shares. Finally, we will call the placeOrder method of the Connection class to send this order to TWS:

    # Create a contract for AAPL stock using SMART order routing.
    aapl_contract = create_contract('AAPL', 'STK', 'SMART', 'SMART', 'USD')

    # Go long 100 shares of AAPL
    aapl_order = create_order('MKT', 100, 'BUY')

    # Place order on IB TWS.
    tws_conn.placeOrder(order_id, aapl_contract, aapl_order)

That's it! Let's run our Python script.
We should get output similar to the following:

    Server Error: <error id=-1, errorCode=2104, errorMsg=Market data farm connection is OK:ibdemo>
    Server Response: error, <error id=-1, errorCode=2104, errorMsg=Market data farm connection is OK:ibdemo>
    Server Version: 75
    TWS Time at connection: 20141210 23:14:17 CST
    Server Msg: managedAccounts - <managedAccounts accountsList=DU15200>
    Server Msg: nextValidId - <nextValidId orderId=1>
    Server Error: <error id=-1, errorCode=2104, errorMsg=Market data farm connection is OK:ibdemo>
    Server Msg: error - <error id=-1, errorCode=2104, errorMsg=Market data farm connection is OK:ibdemo>
    Server Error: <error id=-1, errorCode=2107, errorMsg=HMDS data farm connection is inactive but should be available upon demand.demohmds>
    Server Msg: error - <error id=-1, errorCode=2107, errorMsg=HMDS data farm connection is inactive but should be available upon demand.demohmds>

Basically, what the error messages say is that there are no errors and the connections are OK. Should the simulated order be executed successfully during market trading hours, the trade will be reflected in TWS.

The full source code of our implementation is given as follows:

    """ A Simple Order Routing Mechanism """
    from ib.ext.Contract import Contract
    from ib.ext.Order import Order
    from ib.opt import Connection

    def error_handler(msg):
        print "Server Error:", msg

    def server_handler(msg):
        print "Server Msg:", msg.typeName, "-", msg

    def create_contract(symbol, sec_type, exch, prim_exch, curr):
        contract = Contract()
        contract.m_symbol = symbol
        contract.m_secType = sec_type
        contract.m_exchange = exch
        contract.m_primaryExch = prim_exch
        contract.m_currency = curr
        return contract

    def create_order(order_type, quantity, action):
        order = Order()
        order.m_orderType = order_type
        order.m_totalQuantity = quantity
        order.m_action = action
        return order

    if __name__ == "__main__":
        client_id = 1
        order_id = 119
        port = 7496
        tws_conn = None
        try:
            # Establish connection to TWS.
            tws_conn = Connection.create(port=port, clientId=client_id)
            tws_conn.connect()

            # Assign error handling function.
            tws_conn.register(error_handler, 'Error')

            # Assign server messages handling function.
            tws_conn.registerAll(server_handler)

            # Create AAPL contract and send order
            aapl_contract = create_contract('AAPL', 'STK', 'SMART', 'SMART', 'USD')

            # Go long 100 shares of AAPL
            aapl_order = create_order('MKT', 100, 'BUY')

            # Place order on IB TWS.
            tws_conn.placeOrder(order_id, aapl_contract, aapl_order)
        finally:
            # Disconnect from TWS
            if tws_conn is not None:
                tws_conn.disconnect()

Summary

In this article, we were introduced to the evolution of trading from the pits to the electronic trading platform, and learned how algorithmic trading came about. We looked at some brokers offering API access to their trading service offerings. To help us get started on our journey in developing an algorithmic trading system, we used the TWS of IB and the IbPy Python module. In our first trading program, we successfully sent an order to our broker through the TWS API using a demonstration account.
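One practical note that is not part of the original article: IbPy and the listings above are written for Python 2, where print is a statement. If you experiment under Python 3, the handlers need the function-call form, roughly as follows (the rest of the script is assumed unchanged):

    # Python 3 variants of the two handlers shown above.
    def error_handler(msg):
        print("Server Error:", msg)

    def server_handler(msg):
        print("Server Msg:", msg.typeName, "-", msg)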

Hacking a Raspberry Pi project? Understand electronics first!

Packt
29 Apr 2015
20 min read
In this article, by Rushi Gajjar, author of the book Raspberry Pi Sensors, you will see the basic requirements needed for building the RasPi projects. You can't spend even a day without electronics, can you? Electronics is everywhere, from your toothbrush to cars and in aircrafts and spaceships too. This article will help you understand the concepts of electronics that can be very useful while working with the RasPi. You might have read many electronics-related books, and they might have bored you with concepts when you really wanted to create or build projects. I believe that there must be a reason for explanations being given about electronics and its applications. Once you know about the electronics, we will walk through the communication protocols and their uses with respect to communication among electronic components and different techniques to do it. Useful tips and precautions are listed before starting to work with GPIOs on the RasPi. Then, you will understand the functionalities of GPIO and blink the LED using shell, Python, and C code. Let's cover some of the fundamentals of electronics. (For more resources related to this topic, see here.) Basic terminologies of electronics There are numerous terminologies used in the world of electronics. From the hardware to the software, there are millions of concepts that are used to create astonishing products and projects. You already know that the RasPi is a single-board computer that contains plentiful electronic components built in, which makes us very comfortable to control and interface the different electronic devices connected through its GPIO port. In general, when we talk about electronics, it is just the hardware or a circuit made up of several Integrated Circuits (ICs) with different resistors, capacitors, inductors, and many more components. But that is not always the case; when we build our hardware with programmable ICs, we also need to take care of internal programming (the software). For example, in a microcontroller or microprocessor, or even in the RasPi's case, we can feed the program (technically, permanently burn/dump the programs) into the ICs so that when the IC is powered up, it follows the steps written in the program and behaves the way we want. This is how robots, your washing machines, and other home appliances work. All of these appliances have different design complexities, which depends on their application. There are some functions, which can be performed by both software and hardware. The designer has to analyze the trade-off by experimenting on both; for example, the decoder function can be written in the software and can also be implemented on the hardware by connecting logical ICs. The developer has to analyze the speed, size (in both the hardware and the software), complexity, and many more parameters to design these kinds of functions. The point of discussing these theories is to get an idea on how complex electronics can be. It is very important for you to know these terminologies because you will need them frequently while building the RasPi projects. Voltage Who discovered voltage? Okay, that's not important now, let's understand it first. The basic concept follows the physics behind the flow of water. Water can flow in two ways; one is a waterfall (for example, from a mountain top to the ground) and the second is forceful flow using a water pump. The concept behind understanding voltage is similar. 
Voltage is the potential difference between two points, which means that a voltage difference allows the flow of charges (electrons) from the higher potential to the lower potential. To understand the preceding example, consider lightning, which can be compared to a waterfall, and batteries, which can be compared to a water pump. When batteries are connected to a circuit, chemical reactions within them pump the flow of charges from the positive terminal to the negative terminal. Voltage is always mentioned in volts (V). The AA battery cell usually supplies 3V. By the way, the term voltage was named after the great scientist Alessandro Volta, who invented the voltaic cell, which was then known as a battery cell. Current Current is the flow of charges (electrons). Whenever a voltage difference is created, it causes current to flow in a fixed direction from the positive (higher) terminal to the negative (lower) terminal (known as conventional current). Current is measured in amperes (A). The electron current flows from the negative terminal of the battery to the positive terminal. To prevent confusion, we will follow the conventional current, which is from the positive terminal to the negative terminal of the battery or the source. Resistor The meaning of the word "resist" in the Oxford dictionary is "to try to stop or to prevent." As the definition says, a resistor simply prevents the flow of current. When current flows through a resistor, there is a voltage drop in it. This drop directly depends on the amount of current flowing through resistor and value of the resistance. There is a formula used to calculate the amount of voltage drop across the resistor (or in the circuit), which is also called as the Ohm's law (V = I * R). Resistance is measured in ohms (Ω). Let's see how resistance is calculated with this example: if the resistance is 10Ω and the current flowing from the resistor is 1A, then the voltage drop across the resistor is 10V. Here is another example: when we connect LEDs on a 5V supply, we connect a 330Ω resistor in series with the LEDs to prevent blow-off of the LEDs due to excessive current. The resistor drops some voltage in it and safeguards the LEDs. We will extensively use resistors to develop our projects. Capacitor A resistor dissipates energy in the form of heat. In contrast to that, a capacitor stores energy between its two conductive plates. Often, capacitors are used to filter voltage supplied in filter circuits and to generate clear voice in amplifier circuits. Explaining the concept of capacitance will be too hefty for this article, so let me come to the main point: when we have batteries to store energy, why do we need to use capacitors in our circuits? There are several benefits of using a capacitor in a circuit. Many books will tell you that it acts as a filter or a surge suppressor, and they will use terms such as power smoothing, decoupling, DC blocking, and so on. In our applications, when we use capacitors with sensors, they hold the voltage level for some time so that the microprocessor has enough time to read that voltage value. The sensor's data varies a lot. It needs to be stable as long as a microprocessor is reading that value to avoid erroneous calculations. The holding time of a capacitor depends on an RC time constant, which will be explained when we will actually use it. 
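As a quick preview of the RC time constant just mentioned, the following small Python sketch (mine, not the book's) computes tau = R × C and the voltage across a charging capacitor; the 10 kΩ and 100 µF values are arbitrary examples.

    import math

    def capacitor_voltage(v_supply, r_ohms, c_farads, t_seconds):
        # Voltage across a charging capacitor: V(t) = Vs * (1 - e^(-t / RC))
        tau = r_ohms * c_farads
        return v_supply * (1 - math.exp(-t_seconds / tau))

    R, C = 10000, 100e-6               # 10 kOhm and 100 uF -> tau = 1 second
    print("tau =", R * C, "s")
    for t in (0.5, 1.0, 5.0):          # after roughly 5 tau the capacitor is essentially fully charged
        print(t, round(capacitor_voltage(5.0, R, C, t), 2))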
Open circuit and short circuit Now, there is an interesting point to note: when there is voltage available on the terminal but no components are connected across the terminals, there is no current flow, which is often called an open circuit. In contrast, when two terminals are connected, with or without a component, and charge is allowed to flow, it's called a short circuit, connected circuit, or closed circuit. Here's a warning for you: do not short (directly connect) the two terminals of a power supply such as batteries, adaptors, and chargers. This may cause serious damages, which include fire damage and component failure. If we connect a conducting wire with no resistance, let's see what Ohm's law results in: R = 0Ω then I = V/0, so I = ∞A. In theory, this is called infinite (uncountable), and practically, it means a fire or a blast! Series and parallel connections In electrical theory, when the current flowing through a component does not divide into paths, it's a series connection. Also, if the current flowing through each component is the same then those components are said to be in series. If the voltage across all the components is the same, then the connection is said to be in parallel. In a circuit, there can be combination of series and parallel connections. Therefore, a circuit may not be purely a series or a parallel circuit. Let's study the circuits shown in the following diagram: Series and parallel connections At the first glance, this figure looks complex with many notations, but let's look at each component separately. The figure on the left is a series connection of components. The battery supplies voltage (V) and current (I). The direction of the current flow is shown as clockwise. As explained, in a series connection, the current flowing through every component is the same, but the voltage values across all the components are different. Hence, V = V1 + V2 + V3. For example, if the battery supplies 12V, then the voltage across each resistor is 4V. The current flowing through each resistor is 4 mA (because V = IR and R = R1 + R2 + R3 = 3K). The figure on the right represents a parallel connection. Here, each of the components gets the same voltage but the current is divided into different paths. The current flowing from the positive terminal of the battery is I, which is divided into I1 and I2. When I1 flows to the next node, it is again divided into two parts and flown through R5 and R6. Therefore, in a parallel circuit, I = I1 + I2. The voltage remains the same across all the resistors. For example, if the battery supplies 12V, the voltage across all the resistors is 12V but the current through all the resistors will be different. In the parallel connection example, the current flown through each circuit can be calculated by applying the equations of current division. Give it a try to calculate! When there is a combination of series and parallel circuits, it needs more calculations and analysis. Kirchhoff's laws, nodes, and mesh equations can be used to solve such kinds of circuits. All of that is too complex to explain in this article; you can refer any standard circuits-theory-related books and gain expertise in it. Kirchhoff's current law: At any node (junction) in an electrical circuit, the sum of currents flowing into that node is equal to the sum of currents flowing out of that node. Kirchhoff's voltage law: The directed sum of the electrical potential differences (voltage) around any closed network is zero. 
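The series and parallel rules described above are easy to check numerically. The following helper functions are only an illustrative sketch (not from the book); they reproduce the 12 V across three 1 kΩ resistors example, which gives 4 mA, and the 5 V LED resistor example mentioned earlier.

    def series(*resistors):
        # In series, resistances simply add.
        return sum(resistors)

    def parallel(*resistors):
        # In parallel, the reciprocals add: 1/R = 1/R1 + 1/R2 + ...
        return 1.0 / sum(1.0 / r for r in resistors)

    # Three 1 kOhm resistors in series across 12 V: 3 kOhm total, so I = 12 / 3000 = 0.004 A (4 mA).
    r_total = series(1000, 1000, 1000)
    print(r_total, 12.0 / r_total)

    # Ohm's law check for the LED example: dropping (5 V - ~2 V LED forward drop) at 10 mA
    # needs about 300 ohms, which is why a standard 330-ohm part is commonly used.
    print((5.0 - 2.0) / 0.010)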
Pull-up and pull-down resistors Pull-up and pull-down resistors are one of the important terminologies in electronic systems design. As the title says, there are two types of pulling resistors: pull-up and pull-down. Both have the same functionality, but the difference is that pull-up resistor pulls the terminal to the voltage supplied and the pull-down resistor pulls the terminal to the ground or the common line. The significance of connecting a pulling resistor to a node or terminal is to bring back the logic level to the default value when no input is present on that particular terminal. The benefit of including a pull-up or pull-down resistor is that it makes the circuitry susceptible to noise, and the logic level (1 or 0) cannot be changed from a small variation in terms of voltages (due to noise) on the terminal. Let's take a look at the example shown in the following figure. It shows a pull-up example with a NOT gate (a NOT gate gives inverted output in its OUT terminal; therefore, if logic one is the input, the output is logic zero). We will consider the effects with and without the pull-up resistor. The same is true for the pull-down resistor. Connection with and without pull-up resistors In general, logic gates have high impedance at their input terminal, so when there is no connection on the input terminal, it is termed as floating. Now, in the preceding figure, the leftmost connection is not recommended because when the switch is open (OFF state), it leaves the input terminal floating and any noise can change the input state of the NOT gate. The reason of the noise can be any. Even the open terminals can act as an antenna and can create noise on the pin of the NOT gate. The circuit shown in the middle is a pull-up circuit without a resistor and it is highly recommended not to use it. This kind of connection can be called a pull-up but should never be used. When the switch is closed (ON state), the VCC gets a direct path to the ground, which is the same as a short circuit. A large amount of current will flow from VCC to ground, and this can damage your circuit. The rightmost figure shows the best way to pull up because there is a resistor in which some voltage drop will occur. When the switch is open, the terminal of the NOT gate will be floated to the VCC (pulled up), which is the default. When the switch is closed, the input terminal of the NOT gate will be connected to the ground and it will experience the logic zero state. The current flowing through the resistor will be nominal this time. For example, if VCC = 5V, R7 = 1K, and I = V/R, then I = 5mA, which is in the safe region. For the pull-down circuit example, there can be an interchange between the switch and a resistor. The resistor will be connected between the ground and the input terminal of the NOT gate. When using sensors and ICs, keep in mind that if there is a notation of using pull-ups or pull-downs in datasheets or technical manuals, it is recommended to use them wherever needed. Communication protocols It has been a lot theory so far. There can be numerous components, including ICs and digital sensors, as peripherals of a microprocessor. There can be a large amount of data with the peripheral devices, and there might be a need to send it to the processor. How do they communicate? How does the processor understand that the data is coming into it and that it is being sent by the sensor? There is a serial, or parallel, data-line connection between ICs and a microprocessor. 
Parallel connections are faster than the serial one but are less preferred because they require more lines, for example, 8, 16, or more than that. A PCI bus can be an example of a parallel communication. Usually in a complex or high-density circuit, the processor is connected to many peripherals, and in that case, we cannot have that many free pins/lines to connect an additional single IC. Serial communication requires up to four lines, depending on the protocol used. Still, it cannot be said that serial communication is better than parallel, but serial is preferred when low pin counts come into the picture. In serial communication, data is sent over frames or packets. Large data is broken into chunks and sent over the lines by a frame or a packet. Now, what is a protocol? A protocol is a set of rules that need to be followed while interfacing the ICs to the microprocessor, and it's not limited to the connection. The protocol also defines the data frame structures, frame lengths, voltage levels, data types, data rates, and so on. There are many standard serial protocols such as UART, FireWire, Ethernet, SPI, I2C, and more. The RasPi 1 models B, A+, B+, and the RasPi 2 model B have one SPI pin, one I2C pin, and one UART pin available on the expansion port. We will see these protocols one by one. UART UART is a very common interface, or protocol, that is found in almost every PC or microprocessor. UART is the abbreviated form of Universal Asynchronous Receiver and Transmitter. This is also known as the RS-232 standard. This protocol is full-duplex and a complete standard, including electrical, mechanical, and physical characteristics for a particular instance of communication. When data is sent over a bus, the data levels need to be changed to suit the RS-232 bus levels. Varying voltages are sent by a transmitter on a bus. A voltage value greater than 3V is logic zero, while a voltage value less than -3V is logic one. Values between -3V to 3V are called as undefined states. The microprocessor sends the data to the transistor-transistor logic (TTL) level; when we send them to the bus, the voltage levels should be increased to the RS-232 standard. This means that to convert voltage from logic levels of a microprocessor (0V and 5V) to these levels and back, we need a level shifter IC such as MAX232. The data is sent through a DB9 connector and an RS-232 cable. Level shifting is useful when we communicate over a long distance. What happens when we need to connect without these additional level shifter ICs? This connection is called a NULL connection, as shown in the following figure. It can be observed that the transmit and receive pins of a transmitter are cross-connected, and the ground pins are shared. This can be useful in short-distance communication. In UART, it is very important that the baud rates (symbols transferred per second) should match between the transmitter and the receiver. Most of the time, we will be using 9600 or 115200 as the baud rates. The typical frame of UART communication consists of a start bit (usually 0, which tells receiver that the data stream is about to start), data (generally 8 bit), and a stop bit (usually 1, which tells receiver that the transmission is over). Null UART connection The following figure represents the UART pins on the GPIO header of the RasPi board. Pin 8 and 10 on the RasPi GPIO pin header are transmit and receive pins respectively. Many sensors do have the UART communication protocol enabled on their output pins. 
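On the RasPi, UART streams such as the sensor outputs discussed here are usually read from Python with the pyserial module. The sketch below is a minimal, hedged example; the device name (/dev/ttyAMA0 on the original boards) and the 9600 baud rate are assumptions that depend on your RasPi model and sensor.

    import serial  # pyserial; install with `pip install pyserial`

    # Open the Pi's hardware UART at 9600 baud; adjust the device and baud rate for your setup.
    port = serial.Serial("/dev/ttyAMA0", baudrate=9600, timeout=1)

    try:
        while True:
            line = port.readline()          # read until newline or the 1-second timeout
            if line:
                print(line.decode(errors="replace").strip())
    finally:
        port.close()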
Sensors such as gas sensors (MQ-2) use UART communication to communicate with the RasPi. Another sensor that works on UART is the nine-axis motion sensor from LP Research (LPMS-UARTL), which allows you to build quadcopters on your own by providing a three-axis gyroscope, three-axis magnetometer, and three-axis accelerometer. The TMP104 sensor from Texas Instruments is a digital temperature sensor with a UART interface; its UART allows a daisy-chain topology (in which you connect one sensor's transmit to the receive of the second, the second's transmit to the third's receive, and so on, for up to eight sensors). On the RasPi, an application program written against the UART driver in Python or C is needed to obtain the data coming from a sensor.

Serial Peripheral Interface

The Serial Peripheral Interface (SPI) is a full-duplex, short-distance, single-master protocol. Unlike UART, it is a synchronous communication protocol. One of the simplest connections is the single master-slave connection shown in the next figure. There are usually four wires in total: clock, Master In Slave Out (MISO), Master Out Slave In (MOSI), and chip select (CS). Have a look at the following image:

Simple master-slave SPI connections

The master always initiates the data frame and clock. The clock frequency can be varied by the master according to the slave's performance and capabilities; it typically ranges from 1 MHz to 40 MHz, and can go higher. Some slave devices trigger on an active-low input, which means that whenever a logic zero signal is given by the master to the slave on the CS pin, the slave chip is turned on; it then accepts the clock and data from the master. There can be multiple slaves connected to a master device, but to connect them we need additional CS lines from the master, which becomes one of the disadvantages of the SPI protocol as the number of slaves grows. There is no slave acknowledgement sent to the master, so the master sends data without knowing whether the slave has received it or not. If both the master and the slave are programmable, then during runtime (while executing the program) the master and slave roles can be interchanged. For the RasPi, we can easily write the SPI communication code in either Python or C. The location of the SPI pins on RasPi 1 models A+ and B+ and RasPi 2 model B can be seen in the following diagram. This diagram is still valid for RasPi 1 model B.
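From Python, the SPI bus described above is commonly driven with the spidev module. The following is a minimal sketch rather than a definitive recipe: SPI must be enabled on the Pi (for example through raspi-config), and the two bytes sent are placeholders, since the real transaction format comes from the slave device's datasheet.

    import spidev  # install with `pip install spidev` on the RasPi

    spi = spidev.SpiDev()
    spi.open(0, 0)                 # bus 0, chip select 0 (CE0 on the RasPi header)
    spi.max_speed_hz = 1000000     # 1 MHz clock; slaves differ, so check the datasheet

    # Send two placeholder bytes and read the reply returned during the same transfer.
    reply = spi.xfer2([0x01, 0x00])
    print(reply)
    spi.close()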
Inter-Integrated Circuit

Inter-Integrated Circuit (I2C) is a protocol that works with only two wires, known as data (SDA) and clock (SCL). It is a half-duplex (a type of communication where, whenever the sender sends a command, the receiver just listens and cannot transmit anything, and vice versa), multimaster protocol. The I2C protocol is patented by Philips, and whenever an IC manufacturer wants to include I2C in their chip, they need a license. Many of the ICs and peripherals around us are integrated with the I2C communication protocol. The I2C lines (SDA and SCL) are always pulled up via resistors to the input voltage. The I2C bus works at three speeds: high speed (3.4 Mbps), fast (400 Kbps), and slow (less than 100 Kbps). I2C communication is said to work over distances of up to 45 feet, but it's better to keep it under 10 feet. Each I2C device has an address of 7 to 10 bits; using this address, the master can always connect to and send data meant for that particular slave.

The slave device manufacturer provides you with the address to use when you are interfacing the device with the master. Data is received by every slave, but only the slave the data is addressed to can take it. Using the address, the master reads the data available in the predefined data registers in the sensors and processes it on its own. The general setup of an I2C bus configuration is shown in the following diagram:

I2C bus interface

There are 16 x 2 character LCD modules available with an I2C interface in stores; you can just use them and program the RasPi accordingly. Usually, an LCD requires 8 or 4 parallel data bits plus reset, read/write, and enable pins, so the I2C version saves many pins. The I2C pins are represented in the following image, and they can be located in the same place on all the RasPi models.

The I2C protocol is the most widely used protocol of all when we talk about sensor interfacing. Silicon Labs' Si1141 is a proximity and brightness sensor that is nowadays used in mobile phones to provide the auto-brightness and near-proximity features; you can purchase it and easily interface it with the RasPi. The SHT20 from Sensirion also comes with the I2C protocol, and it can be used to measure temperature and humidity. Stepper motor control can be done using I2C-based controllers, which can be interfaced with the RasPi. The most amazing thing is that if you have all of these sensors, you can tie them to a single I2C bus and the RasPi can read data from each of them. Modules with an I2C interface are available for low-pin-count devices; this is why serial communication is so useful.

These protocols are the ones most used with the RasPi. The information given here is not very detailed, as numerous pages could be written on these protocols, but while programming the RasPi, this much information can help you build the projects. (A minimal Python I2C read using the smbus module is sketched after the summary that follows.)

Summary

In this article, you understood the electronics fundamentals that are really going to help you go ahead with more complex projects. It was not all about electronics, but about all the essential concepts that are needed to build RasPi projects. After covering the concepts of electronics, we walked through the communication protocols; it was interesting to learn how electronic devices work together. You learned that just as humans talk to each other in a common language, electronic devices talk to each other using a common protocol.
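As promised above, here is a minimal I2C read from Python using the smbus module that ships with Raspbian (available through the python-smbus package). The slave address and register below are placeholders for values you would take from your sensor's datasheet.

    import smbus

    bus = smbus.SMBus(1)        # I2C bus 1 on later RasPi models (bus 0 on the very first revision)

    DEVICE_ADDR = 0x40          # placeholder 7-bit slave address from the sensor's datasheet
    REGISTER = 0x00             # placeholder register to read

    value = bus.read_byte_data(DEVICE_ADDR, REGISTER)
    print("Register 0x%02X = 0x%02X" % (REGISTER, value))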

Auto updating child records in Process Builder

Packt
29 Apr 2015
5 min read
In this article by Rakesh Gupta, the author of the book Learning Salesforce Visual Workflow, we will discuss how to auto update child records using Process Builder of Salesforce. There are several business use cases where a customer wants to update child records based on some criteria, for example, auto-updating all related Opportunity to Closed-Lost if an account is updated to Inactive. To achieve these types of business requirements, you can use the Apex trigger. You can also achieve these types of requirements using the following methods: Process Builder A combination of Flow and Process Builder A combination of Flow and Inline Visualforce page on the account detail page (For more resources related to this topic, see here.) We will use Process Builder to solve these types of business requirements. Let's start with a business requirement. Here is a business scenario: Alice Atwood is working as a system administrator in Universal Container. She has received a requirement that once an account gets activated, the account phone must be synced with the related contact asst. phone field. This means whenever an account phone fields gets updated, the same phone number will be copied to the related contacts asst. phone field. Follow these instructions to achieve the preceding requirement using Process Builder: First of all, navigate to Setup | Build | Customize | Accounts | Fields and make sure that the Active picklist is available in your Salesforce organization. If it's not available, create a custom Picklist field with the name as Active, and enter the Yes and No values. To create a Process, navigate to Setup | Build | Create | Workflow & Approvals | Process Builder, click on New Button, and enter the following details: Name: Enter the name of the Process. Enter Update Contacts Asst Phone in Name. This must be within 255 characters. API Name: This will be autopopulated based on the name. Description: Write some meaningful text so that other developers or administrators can easily understand why this Process is created. The properties window will appear as shown in the following screenshot: Once you are done, click on the Save button. It will redirect you to the Process canvas, which allows you to create or modify the Process. After Define Process Properties, the next task is to select the object on which you want to create a Process and define the evaluation criteria. For this, click on the Add Object node. It will open an additional window on the right side of the Process canvas screen, where you have to enter the following details: Object: Start typing and then select the Account object. Start the process: For Start the process, select when a record is created or edited. This means the Process will fire every time, irrespective of record creation or updating. Allow process to evaluate a record multiple times in a single transaction?: Select this checkbox only when you want the Process to evaluate the same record up to five times in a single transaction. It might re-examine the record because a Process, Workflow Rule, or Flow may have updated the record in the same transaction. In this case, leave this unchecked. This window will appear as shown in the following screenshot: Once you are done with adding the Process criteria, click on the Save button. Similar to the Workflow Rule, once you save the panel, it doesn't allow you to change the selected object. After defining the evaluation criteria, the next step is to add the Process criteria. 
Once the Process criteria are true, only then will the Process execute the associated actions. To define the Process criteria, click on the Add Criteria node. It will open an additional window on the right side of the Process canvas screen, where you have to enter the following details: Criteria Name: Enter a name for the criteria node. Enter Update Contacts in Criteria Name. Criteria for Executing Actions: Select the type of criteria you want to define. You can use either a formula or a filter to define the Process criteria or no criteria. In this case, select Active equals to Yes. This means the Process will fire only when the account is active. This window will appear as shown in the following screenshot: Once you are done with defining the Process criteria, click on the Save button. Once you are done with the Process criteria node, the next step is to add an immediate action to update the related contact's asst. phone field. For this, we will use the Update Records action available under Process. Click on Add Action available under IMMEDIATE ACTIONS. It will open an additional window on the right side of the Process canvas screen, where you have to enter the following details: Action Type: Select the type of action. In this case, select Update Records. Action Name: Enter a name for this action. Enter Update Assts Phone in Action Name. Object: Start typing and then select the [Account].Contacts object. Field: Map the Asst. Phone field with the [Account]. Phone field. To select the fields, you can use field picker. To enter the value, use the text entry field. It will appear as shown in the following screenshot: Once you are done, click on the Save button. Once you are done with the immediate action, the final step is to activate it. To activate a Process, click on the Activate button available on the button bar. From now on, if you try to update an active account, Process will automatically update the related contact's asst. phone with the value available in the account phone field. Summary In this article, we have learned the technique of auto updating records in Process Builder. Resources for Article: Further resources on this subject: Visualforce Development with Apex [Article] Configuration in Salesforce CRM [Article] Introducing Salesforce Chatter [Article]

Raspberry Pi and 1-Wire

Packt
28 Apr 2015
13 min read
In this article by Jack Creasey, author of Raspberry Pi Essentials, we will learn about the remote input/output technology and devices that can be used with the Raspberry Pi. We will also specifically learn about 1-Wire, and how it can be interfaced with the Raspberry Pi. The concept of remote I/O has its limitations (for example, it requires locating the Pi where the interface work needs to be done), but it can work well for many projects. However, it can be a pain to power the Pi in remote locations where you need the I/O to occur. The most obvious power solutions are:

Battery-powered systems and, perhaps, solar cells to keep the unit functional over longer periods of time
Power over Ethernet (POE), which provides the data connection and power over the same Ethernet cable at distances of up to 100 meters, without the use of a repeater
AC/DC power supply where a local supply is available

Connecting to Wi-Fi could also be a potential but problematic solution because attenuation through walls impacts reception and distance. Many projects run a headless, remote Pi to allow locating it closer to the data acquisition point. This strategy may require yet another computer system to provide the Human Machine Interface (HMI) to control the remote Raspberry Pi.

(For more resources related to this topic, see here.)

Remote I/O

I'd like to introduce you to a very mature I/O bus as a possibility for some of your Raspberry Pi projects; it's not fast, but it's simple to use and can be exceptionally flexible. It is called 1-Wire, and it uses endpoint interface chips that require only two wires (a data/clock line and ground); they are line powered, apart from a few devices with advanced functionality. The data rate is usually 16 kbps, and the 1-Wire single master driver will handle distances up to approximately 200 meters on simple telephone wire. The system was developed by Dallas Semiconductor back in 1990, and the technology is now owned by Maxim. I have a few 1-Wire iButton memory chips from 1994 that still work just fine.

While you can get 1-Wire products today that are supplied as surface mount chips, 1-Wire products really started with the practically indestructible iButtons. These consist of a stainless steel coin very similar to the small CR2032 coin batteries in common use today. They come in 3 mm and 6 mm thicknesses and can be attached to a key ring carrier. I'll cover a Raspberry Pi installation to read these iButtons in this article. The following image shows the dimensions for the iButton, the key ring carriers, and some available reader contacts:

The 1-Wire protocol

The master provides all the timing and power when addressing and transferring data to and from 1-Wire devices. A 1-Wire bus looks like this:

When the master is not driving the bus, it is pulled high by a resistor, and all the connected devices have an internal capacitor, which allows them to store energy. When the master pulls the bus low to send data bits, the bus devices use their internal energy store just like a battery, which allows them to sense inbound data and to drive the bus low when they need to return data. The following typical block diagram shows the internal structure of a 1-Wire device and the range of functions it could provide:

There are lots of data sheets on the 1-Wire devices produced by Maxim, Microchip, and other processor manufacturers. It's fun to go back to the 1989 patent (now expired) by Dallas and see how it was originally conceived (http://www.google.com/patents/US5210846).
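To make the bit-level behavior just described a little more concrete, here is a rough, purely illustrative Python sketch of how a master could generate 1-Wire write and read time slots by bit-bashing a GPIO pin with the RPi.GPIO library. The pin number and the microsecond delays are assumptions taken from the standard-speed timing budget, and Python on Linux cannot reliably produce such short, precise delays anyway, which is one reason this article drives the bus through a dedicated bus master chip instead.

import time
import RPi.GPIO as GPIO

PIN = 4                      # hypothetical GPIO pin, pulled up to 3.3 V externally
GPIO.setmode(GPIO.BCM)
GPIO.setup(PIN, GPIO.IN)     # released bus idles high via the pull-up resistor

def write_bit(bit):
    # A short low pulse writes a 1; holding the bus low for most of the slot writes a 0
    GPIO.setup(PIN, GPIO.OUT)
    GPIO.output(PIN, GPIO.LOW)
    time.sleep(0.000006 if bit else 0.000060)   # roughly 6 us or 60 us low
    GPIO.setup(PIN, GPIO.IN)                    # release; the pull-up restores the high level
    time.sleep(0.000070)                        # let the time slot finish

def read_bit():
    # The master opens the slot; a slave sending a 0 holds the bus low
    GPIO.setup(PIN, GPIO.OUT)
    GPIO.output(PIN, GPIO.LOW)
    time.sleep(0.000006)
    GPIO.setup(PIN, GPIO.IN)
    time.sleep(0.000009)                        # sample early in the slot
    bit = GPIO.input(PIN)
    time.sleep(0.000055)                        # let the time slot finish
    return bit

The point to take away is simply that every bit is a fixed-length time slot that the master opens by pulling the line low, with the pull-up resistor and the devices' stored charge doing the rest.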
Another great resource to learn the protocol details is at http://pdfserv.maximintegrated.com/en/an/AN937.pdf. To look at a range of devices, go to http://www.maximintegrated.com/en/products/comms/one-wire.html. For now, all you need to know is that all the 1-Wire devices have a basic serial number capability that is used to identify and talk to a given device. This silicon serial number is globally unique. The initial transactions with a device involve reading a 64-bit data structure that contains a 1-byte family code (device type identifier), a 6-byte globally unique device serial number, and a 1-byte CRC field, as shown in the following diagram: The bus master reads the family code and serial number of each device on the bus and uses it to talk to individual devices when required. Raspberry Pi interface to 1-Wire There are three primary ways to interface to the 1-Wire protocol devices on the Raspberry Pi: W1-gpio kernel: This module provides bit bashing of a GPIO port to support the 1-Wire protocol. Because this module is not recommended for multidrop 1-Wire Microlans, we will not consider it further. DS9490R USB Busmaster interface: This is used in a commercial 1-Wire reader supplied by Maxim (there are third-party copies too) and will function on most desktop and laptop systems as well as the Raspberry Pi. For further information on this device, go to http://datasheets.maximintegrated.com/en/ds/DS9490-DS9490R.pdf. DS2482 I2C Busmaster interface: This is used in many commercial solutions for 1-Wire. Typically, the boards are somewhat unique since they are built for particular microcomputer versions. For example, there are variants produced for the Raspberry Pi and for Arduino. For further reading on these devices, go to http://www.maximintegrated.com/en/app-notes/index.mvp/id/3684. I chose a unique Raspberry Pi solution from AB Electronics based on the I2C 1-Wire DS2482-100 bridge. The following image shows the 1-Wire board with an RJ11 connector for the 1-Wire bus and the buffered 5V I2C connector pins shown next to it: For the older 26-pin GPIO header, go to https://www.abelectronics.co.uk/products/3/Raspberry-Pi/27/1-Wire-Pi, and for the newer 40-pin header, go to https://www.abelectronics.co.uk/products/17/Raspberry-Pi--Raspberry-Pi-2-Model-B/60/1-Wire-Pi-Plus. This board is a superb implementation (IMHO) with ESD protection for the 1-Wire bus and a built-in level translator for 3.3-5V I2C buffered output available on a separate connector. Address pins are provided, so you could install more boards to support multiple isolated 1-Wire Microlan cables. There is just one thing that is not great in the board—they could have provided one or two iButton holders instead of the prototyping area. The schematic for the interface is shown in the following diagram: 1-Wire software for the Raspberry Pi The OWFS package supports reading and writing to 1-Wire devices over USB, I2C, and serial connection interfaces. It will also support the USB-connected interface bridge, the I2C interface bridge, or both. Before we install the OWFS package, let’s ensure that I2C works correctly so that we can attach the board to the Pi's motherboard. The following are the steps for the 1-Wire software installation on the Raspberry Pi. Start the raspi-config utility from the command line: sudo raspi-config Select Advanced Options, and then I2C: Select Yes to enable I2C and then click on OK. Select Yes to load the kernel module and then click on OK. 
Lastly, select Finish and reboot the Pi for the settings to take effect. If you are using an early raspi-config (you don’t have the aforementioned options) you may have to do the following: Enter the sudo nano /etc/modprobe.d/raspi-blacklist.conf command. Delete or comment out the line: blacklist i2c-bcm2708 Save the file. Edit the modules loaded using the following command: sudo nano /etc/modules Once you have the editor open, perform the following steps: Add the line i2c-dev in its own row. Save the file. Update your system using sudo apt-get update, and sudo apt-get upgrade. Install the i2c-tools using sudo apt-get install –y i2c-tools. Lastly, power down the Pi and attach the 1-Wire board. If you power on the Pi again, you will be ready to test the board functionality and install the OWFS package: Now, let’s check that I2C is working and the 1-Wire board is connected: From the command line, type i2cdetect –l. This command will print out the detected I2C bus; this will usually be i2c-1, but on some early Pis, it may be i2c-0. From the command line, type sudo i2cdetect –y 1.This command will print out the results of an i2C bus scan. You should have a device 0x18 in the listing as shown in the following screenshot; this is the default bridge adapter address. Finally, let's install OWFS: Install OWFS using the following command: sudo apt-get install –y owfs When the install process ends, the OWFS tasks are started, and they will restart automatically each time you reboot the Raspberry Pi. When OWFS starts, to get its startup settings, it reads a configuration file—/etc/owfs.conf. We will edit this file soon to reconfigure the settings. Start Task Manager, and you will see the OWFS processes as shown in the following screenshot; there are three processes, which are owserver, owhttpd, and owftpd: The default configuration file for OWFS uses fake devices, so you don’t need any hardware attached at this stage. We can observe the method to access an owhttpd server by simply using the web browser. By default, the HTTP daemon is set to the localhost:2121 address, as shown in the following screenshot: You will notice that two fake devices are shown on the web page, and the numerical identities use the naming convention xx.yyyyyyyyyyyy. These are hex digits representing x as the device family and y as the serial number. You can also examine the details of the information for each device and see the structure. For example, the xx=10 device is a temperature sensor (DS18S20), and its internal pages show the current temperature ranges. You can find details of the various 1-Wire devices by following the link to the OWFS home page at the top of the web page. Let’s now reconfigure OWFS to address devices on the hardware bridge board we installed: Edit the OWFS configuration file using the following command: sudo nano /etc/owfs.conf Once the editor is open: Comment out the server: FAKE device line. Comment out the ftp: line. Add the line: server: i2c = /dev/i2c-1:0. Save the file. Since we only need bare minimum information in the owfs.conf file, the following minimized file content will work: ######################## SOURCES ######################## # # With this setup, any client (but owserver) uses owserver on the # local machine... # ! 
server: server = localhost:4304 # # I2C device: DS2482-100 or DS2482-800 # server: i2c = /dev/i2c-1:0 # ####################### OWHTTPD #########################   http: port = 2121   ####################### OWSERVER ########################   server: port = localhost:4304 You will find that it’s worth saving the original file from the installation by renaming it and then creating your own minimized file as shown in the preceding code Once you have the owfs.conf file updated, you can reboot the Raspberry Pi and the new settings will be used. You should have only the owserver and owhttpd processes running now, and the localhost:2121 web page should show only the devices on the single 1-Wire net that you have connected to your board. The owhttpd server can of course be addressed locally as localhost:2121 or accessed from remote computers using the IP address of the Raspberry Pi. The following screenshot shows my 1-Wire bus results using only one connected device (DS1992-family 08): At the top level, the device entries are cached. They will remain visible for at least a minute after you remove them. If you look instead at the uncached entries, they reflect instantaneous arrival and removal device events. You can use the web page to reconfigure all the timeouts and cache values, and the OWFS home page provides the details. Program access to the 1-Wire bus You can programmatically query and write to devices on the 1-Wire bus using the following two methods (there are of course other ways of doing this). Both these methods indirectly read and write using the owserver process: You can use command-line scripts (Bash) to read and write to 1-Wire devices. The following steps show you to get program access to the 1-Wire bus: From the command-line, install the shell support using the following command: sudo apt-get install –y ow-shell The command-line utilities are owget, owdir, owread, and owwrite. While in a multi-section 1-Wire Microlan, you need to specify the bus number, in our simple case with only one 1-Wire Microlan, you can type owget or owdir at the command line to read the device IDs, for example, my Microlan returned: Notice that the structure of the 1-Wire devices is identical to that exposed on the web page, so with the shell utilities, you can write Bash scripts to read and write device parameters. You can use Python to read and write to the 1-Wire devices. Install the Python OWFS module with the following command: sudo apt-get install –y python-ow Open the Python 2 IDLE environment from the Menu, and perform the following steps: In Python Shell, open a new editor window by navigating to File | New Window. In the Editor window, enter the following program: #! /usr/bin/python import ow import time ow.init('localhost:4304') while True:    mysensors = ow.Sensor("/uncached").sensorList( )    for sensor in mysensors[:]:        thisID = sensor.address[2:12]        print sensor.type, "ID = ", thisID    time.sleep(0.5) Save the program as testow.py. You can run this program from the IDLE environment, and it will print out the IDs of all the devices on the 1-Wire Microlan every half second. And if you need help on the python-pw package, then type import ow in the Shell window followed by help(ow) to print the help file. Summary We’ve covered just enough here to get you started with 1-Wire devices for the Raspberry Pi. You can read up on the types of devices available and their potential uses at the web links provided in this article. 
While the iButton products are obviously great for identity-based projects, such as door openers and access control, there are 1-Wire devices that provide digital I/O and even analog-to-digital conversion. These can be very useful when designing remote acquisition and control interfaces for your Raspberry Pi. Resources for Article: Further resources on this subject: Develop a Digital Clock [article] Raspberry Pi Gaming Operating Systems [article] Penetration Testing [article]

Working with Data in Forms

Packt
28 Apr 2015
13 min read
In this article by Mindaugas Pocius, the author of Microsoft Dynamics AX 2012 R3 Development Cookbook, explains about data organization in the forms. We will cover the following recipes: Using a number sequence handler Creating a custom filter control Creating a custom instant search filter (For more resources related to this topic, see here.) Using a number sequence handler Number sequences are widely used throughout the system as a part of the standard application. Dynamics AX also provides a special number sequence handler class to be used in forms. It is called NumberSeqFormHandler, and its purpose is to simplify the usage of record numbering on the user interface. Some of the standard Dynamics AX forms, such as Customers or Vendors, already have this feature implemented. This recipe shows you how to use the number sequence handler class. Although in this demonstration we will use an existing form, the same approach will be applied when creating brand-new forms. For demonstration purposes, we will use the existing Customer groups form located in Accounts receivable | Setup | Customers and change the Customer group field from manual to automatic numbering. How to do it... Carry out the following steps in order to complete this recipe: In the AOT, open the CustGroup form and add the following code snippet to its class declaration: NumberSeqFormHandler numberSeqFormHandler; Also, create a new method called numberSeqFormHandler() in the same form: NumberSeqFormHandler numberSeqFormHandler() {    if (!numberSeqFormHandler)    {        numberSeqFormHandler = NumberSeqFormHandler::newForm(            CustParameters::numRefCustGroupId().NumberSequenceId,            element,            CustGroup_ds,            fieldNum(CustGroup,CustGroup));    }    return numberSeqFormHandler; } In the same form, override the CustGroup data source's create() method with the following code snippet: void create(boolean _append = false) {    element.numberSeqFormHandler(        ).formMethodDataSourceCreatePre();       super(_append);      element.numberSeqFormHandler(        ).formMethodDataSourceCreate(); } Then, override its delete() method with the following code snippet: void delete() {    ttsBegin;      element.numberSeqFormHandler().formMethodDataSourceDelete();      super();      ttsCommit; } Then, override the data source's write() method with the following code snippet: void write() {    ttsBegin;      super();      element.numberSeqFormHandler().formMethodDataSourceWrite();      ttsCommit; } Similarly, override its validateWrite() method with the following code snippet: boolean validateWrite() {    boolean ret;      ret = super();      ret = element.numberSeqFormHandler(        ).formMethodDataSourceValidateWrite(ret) && ret;      return ret; } In the same data source, override its linkActive() method with the following code snippet: void linkActive() {    element.numberSeqFormHandler(        ).formMethodDataSourceLinkActive();      super(); } Finally, override the form's close() method with the following code snippet: void close() {    if (numberSeqFormHandler)    {        numberSeqFormHandler.formMethodClose();    }      super(); } In order to test the numbering, navigate to Accounts receivable | Setup | Customers | Customer groups and try to create several new records—the Customer group value will be generated automatically: How it works... First, we declare an object of type NumberSeqFormHandler in the form's class declaration. 
Then, we create a new corresponding form method called numberSeqFormHandler(), which instantiates the object if it is not instantiated yet and returns it. This method allows us to hold the handler creation code in one place and reuse it many times within the form. In this method, we use the newForm() constructor of the NumberSeqFormHandler class to create the numberSeqFormHandler object. It accepts the following arguments: The number sequence code ensures a proper format of the customer group numbering. Here, we call the numRefCustGroupId() helper method from the CustParameters table to find which number sequence code will be used when creating a new customer group record. The FormRun object, which represents the form itself. The form data source, where we need to apply the number sequence handler. The field ID into which the number sequence will be populated. Finally, we add the various NumberSeqFormHandler methods to the corresponding methods on the form's data source to ensure proper handling of the numbering when various events are triggered. Creating a custom filter control Filtering in forms in Dynamics AX is implemented in a variety of ways. As a part of the standard application, Dynamics AX provides various filtering options, such as Filter By Selection, Filter By Grid, or Advanced Filter/Sort that allows you to modify the underlying query of the currently displayed form. In addition to the standard filters, the Dynamics AX list pages normally allow quick filtering on most commonly used fields. Besides that, some of the existing forms have even more advanced filtering options, which allow users to quickly define complex search criteria. Although the latter option needs additional programming, it is more user-friendly than standard filtering and is a very common request in most of the Dynamics AX implementations. In this recipe, we will learn how to add custom filters to a form. We will use the Main accounts form as a basis and add a few custom filters, which will allow users to search for accounts based on their name and type. How to do it... 
Carry out the following steps in order to complete this recipe: In the AOT, locate the MainAccountListPage form and change the following property for its Filter group: Property Value Columns 2 In the same group, add a new StringEdit control with the following properties: Property Value Name FilterName AutoDeclaration Yes ExtendedDataType AccountName Add a new ComboBox control to the same group with the following properties: Property Value Name FilterType AutoDeclaration Yes EnumType DimensionLedgerAccountType Selection 10 Override the modified() methods for both the newly created controls with the following code snippet: boolean modified() {    boolean ret;      ret = super();      if (ret)    {        MainAccount_ds.executeQuery();    }      return ret; } After all modifications, in the AOT, the MainAccountListPage form will look similar to the following screenshot: In the same form, update the executeQuery() method of the MainAccount data source as follows: public void executeQuery() {    QueryBuildRange qbrName;    QueryBuildRange qbrType;      MainAccount::updateBalances();      qbrName = SysQuery::findOrCreateRange(        MainAccount_q.dataSourceTable(tableNum(MainAccount)),        fieldNum(MainAccount,Name));      qbrType = SysQuery::findOrCreateRange(        MainAccount_q.dataSourceTable(tableNum(MainAccount)),        fieldNum(MainAccount,Type));      if (FilterName.text())    {        qbrName.value(SysQuery::valueLike(queryValue(            FilterName.text())));    }    else  {        qbrName.value(SysQuery::valueUnlimited());    }      if (FilterType.selection() ==        DimensionLedgerAccountType::Blank)    {        qbrType.value(SysQuery::valueUnlimited());    }    else    {        qbrType.value(queryValue(FilterType.selection()));    }      super(); } In order to test the filters, navigate to General ledger | Common | Main accounts and change the values in the newly created filters—the account list will change reflecting the selected criteria: Click on the Advanced Filter/Sort button in the toolbar to inspect how the criteria was applied in the underlying query (note that although changing the filter values here will affect the search results, the earlier created filter controls will not reflect those changes): How it works... We start by changing the Columns property of the existing empty Filter group control to make sure all our controls are placed from the left to the right in one line. We add two new controls that represent the Account name and Main account type filters and enable them to be automatically declared for later usage in the code. We also override their modified() event methods to ensure that the MainAccount data source's query is re-executed whenever the controls' value change. All the code is placed in the executeQuery() method of the form's data source. The code has to be placed before super() to make sure the query is modified before fetching the data. Here, we declare and create two new QueryBuildRange objects, which represent the ranges on the query. We use the findOrCreateRange() method of the SysQuery application class to get the range object. This method is very useful and important, as it allows you to reuse previously created ranges. Next, we set the ranges' values. If the filter controls are blank, we use the valueUnlimited() method of the SysQuery application class to clear the ranges. If the user types some text into the filter controls, we pass those values to the query ranges. 
The global queryValue() function—which is actually a shortcut to SysQuery::value()—ensures that only safe characters are passed to the range. The SysQuery::valueLike() method adds the * character around the account name value to make sure that the search is done based on partial text. Note that the SysQuery helper class is very useful when working with queries, as it does all kinds of input data conversions to make sure they can be safely used. Here is a brief summary of few other useful methods in the SysQuery class: valueUnlimited(): This method returns a string representing an unlimited query range value, that is, no range at all. value(): This method converts an argument into a safe string. The global queryValue() method is a shortcut for this. valueNot(): This method converts an argument into a safe string and adds an inversion sign in front of it. Creating a custom instant search filter The standard form filters and majority of customized form filters in Dynamics AX are only applied once the user presses some button or key. It is acceptable in most cases, especially if multiple criteria are used. However, when the result retrieval speed and usage simplicity has priority over system performance, it is possible to set up the search so the record list is updated instantly when the user starts typing. In this recipe, to demonstrate the instant search, we will modify the Main accounts form. We will add a custom Account name filter, which will update the account list automatically when the user starts typing. How to do it... Carry out the following steps in order to complete this recipe: In the AOT, open the MainAccountListPage form and add a new StringEdit control with the following properties to the existing Filter group: Property Value Name FilterName AutoDeclaration Yes ExtendedDataType AccountName Override the control's textChange() method with the following code snippet: void textChange() {    super();      MainAccount_ds.executeQuery(); } On the same control, override the control's enter() method with the following code snippet: void enter() {    super();    this.setSelection(        strLen(this.text()),        strLen(this.text())); } Update the executeQuery() method of the MainAccount data source as follows: public void executeQuery() {    QueryBuildRange qbrName;      MainAccount::updateBalances();      qbrName = SysQuery::findOrCreateRange(        this.queryBuildDataSource(),        fieldNum(MainAccount,Name));      qbrName.value(        FilterName.text() ?        SysQuery::valueLike(queryValue(FilterName.text())) :        SysQuery::valueUnlimited());      super(); } In order to test the search, navigate to General ledger | Common | Main accounts and start typing into the Account name filter. Note how the account list is being filtered automatically: How it works... Firstly, we add a new control, which represents the Account name filter. Normally, the user's typing triggers the textChange() event method on the active control every time a character is entered. So, we override this method and add the code to re-execute the form's query whenever a new character is typed in. Next, we have to correct the cursor's behavior. Currently, once the user types in the first character, the search is executed and the system moves the focus out of this control and then moves back into the control selecting all the typed text. If the user continues typing, the existing text will be overwritten with the new character and the loop will continue. 
In order to get around this, we have to override the control's enter() event method. This method is called every time the control receives a focus whether it was done by a user's mouse, key, or by the system. Here, we call the setSelection() method. Normally, the purpose of this method is to mark a control's text or a part of it as selected. Its first argument specifies the beginning of the selection and the second one specifies the end. In this recipe, we are using this method in a slightly different way. We pass the length of the typed text as a first argument, which means the selection starts at the end of the text. We pass the same value as a second argument, which means that selection ends at the end of the text. It does not make any sense from the selection point of view, but it ensures that the cursor always stays at the end of the typed text allowing the user to continue typing. The last thing to do is to add some code to the executeQuery() method to change the query before it is executed. Modifying the query was discussed in detail in the Creating a custom filter control recipe. The only thing to note here is that we use the SysQuery::valueLike() helper method which adds * to the beginning and the end of the search string to make the search by a partial string. Note that the system's performance might be affected as the data search is executed every time the user types in a character. It is not recommended to use this approach for large tables. See also The Creating a custom filter control recipe Summary In this article, we learned how to add custom filters to forms to allow users to filter data and create record lists for quick data manipulation. We also learned how to build filter controls on forms and how to create custom instant search filters. Resources for Article: Further resources on this subject: Reporting Based on SSRS [article] Installing and Setting up Sure Step [article] Exploring Financial Reporting and Analysis [article]

Creating Random Insults

Packt
28 Apr 2015
21 min read
In this article by Daniel Bates, the author of Raspberry Pi for Kids - Second edition, we're going to learn and use the Python programming language to generate random funny phrases such as Alice has a smelly foot!

(For more resources related to this topic, see here.)

Python

In this article, we are going to use the Python programming language. Almost all programming languages are capable of doing the same things, but they are usually designed with different specializations. Some languages are designed to perform one job particularly well, some are designed to run code as fast as possible, and some are designed to be easy to learn. Scratch was designed to develop animations and games, and to be easy to read and learn, but it can be difficult to manage large programs. Python is designed to be a good general-purpose language. It is easy to read and can run code much faster than Scratch.

Python is a text-based language. Using it, we type the code rather than arrange building blocks. This makes it easier to go back and change the pieces of code that we have already written, and it allows us to write complex pieces of code more quickly. It does mean that we need to type our programs accurately, though—there are no limits to what we can type, but not all text will form a valid program. Even a simple spelling mistake can result in errors. Lots of tutorials and information about the available features are provided online at http://docs.python.org/2/. Learn Python the Hard Way, by Zed A. Shaw, is another good learning resource, which is available at http://learnpythonthehardway.org.

As an example, let's take a look at some Scratch and Python code, respectively, both of which do the same thing. Here's the Scratch code:

The Python code that does the same job looks like:

def count(maximum):
    value = 0
    while value < maximum:
        value = value + 1
        print "value =", value

count(5)

Even if you've never seen any Python code before, you might be able to read it and tell what it does. Both the Scratch and Python code count from 0 to a maximum value, and display the value each time. The biggest difference is in the first line. Instead of waiting for a message, we define (or create) a function, and instead of sending a message, we call the function (more on how to run Python code, shortly). Notice that we include maximum as an argument to the count function. This tells Python the particular value we would like to use as the maximum, so we can use the same code with different maximum values. The other main differences are that we have while instead of forever if, and we have print instead of say. These are just different ways of writing the same thing. Also, instead of having a block of code wrap around other blocks, we simply put an extra four spaces at the beginning of a line to show which code is contained within a particular block.

Python programming

To run a piece of Python code, open Python 2 from the Programming menu on the Raspberry Pi desktop and perform the following steps:

Type the previous code into the window and you should notice that it can recognize how many spaces to start a line with.
When you have finished the function block, press Enter a couple of times, until you see >>>. This shows that Python recognizes that your block of code has been completed, and that it is ready to receive a new command.
Now, you can run your code by typing in count(5) and pressing Enter. You can change 5 to any number you like and press Enter again to count to a different number.
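If you want to check that you have typed the function in correctly, calling it at the >>> prompt should produce output like this (assuming the code above was entered exactly as shown):

>>> count(5)
value = 1
value = 2
value = 3
value = 4
value = 5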
We're now ready to create our program! The Raspberry Pi also supports Python 3, which is very similar but incompatible with Python 2. You can check out the differences between Python 2 and Python 3 at http://python-future.org/compatible_idioms.html. The program we're going to use to generate phrases As mentioned earlier, our program is going to generate random, possibly funny, phrases for us. To do this, we're going to give each phrase a common structure, and randomize the word that appears in each position. Each phrase will look like: <name> has a <adjective> <noun> Where <name> is replaced by a person's name, <adjective> is replaced by a descriptive word, and <noun> is replaced by the name of an object. This program is going to be a little larger than our previous code example, so we're going to want to save it and modify it easily. Navigate to File | New Window in Python 2. A second window will appear which starts off completely blank. We will write our code in this window, and when we run it, the results will appear in the first window. For the rest of the article, I will call the first window the Shell, and the new window the Code Editor. Remember to save your code regularly! Lists We're going to use a few different lists in our program. Lists are an important part of Python, and allow us to group together similar things. In our program, we want to have separate lists for all the possible names, adjectives, and nouns that can be used in our sentences. We can create a list in this manner: names = ["Alice", "Bob", "Carol"] Here, we have created a variable called names, which is a list. The list holds three items or elements: Alice, Bob, and Carol. We know that it is a list because the elements are surrounded by square brackets, and are separated by commas. The names need to be in quote marks to show that they are text, and not the names of variables elsewhere in the program. To access the elements in a list, we use the number which matches its position, but curiously, we start counting from zero. This is because if we know where the start of the list is stored, we know that its first element is stored at position start + 0, the second element is at position start + 1, and so on. So, Alice is at position 0 in the list, Bob is at position 1, and Carol is at position 2. We use the following code to display the first element (Alice) on the screen: print names[0] We've seen print before: it displays text on the screen. The rest of the code is the name of our list (names), and the position of the element in the list that we want surrounded by square brackets. Type these two lines of code into the Code Editor, and then navigate to Run | Run Module (or press F5). You should see Alice appear in the Shell. Feel free to play around with the names in the list or the position that is being accessed until you are comfortable with how lists work. You will need to rerun the code after each change. What happens if you choose a position that doesn't match any element in the list, such as 10? Adding randomness So far, we have complete control over which name is displayed. Let's now work on displaying a random name each time we run the program. Update your code in the Code Editor so it looks like: import random names = ["Alice", "Bob", "Carol"] position = random.randrange(3) print names[position] In the first line of the code, we import the random module. Python comes with a huge amount of code that other people have written for us, separated into different modules. 
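If you are curious about what a module contains, you can also ask Python directly in the Shell; this is an optional aside, not part of the program we are building:

>>> import random
>>> dir(random)              # lists the names available inside the module
>>> help(random.randrange)   # shows the documentation for one function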
Some of this code is simple, but makes life more convenient for us, and some of it is complex, allowing us to reuse other people's solutions for the challenges we face and concentrate on exactly what we want to do. In this case, we are making use of a collection of functions that deal with random behavior. We must import a module before we are able to access its contents. Information on the available modules available can be found online at www.python.org/doc/. After we've created the list of names, we then compute a random position in the list to access. The name random.randrange tells us that we are using a function called randrange, which can be found inside the random module that we imported earlier. The randrange function gives us a random whole number less than the number we provide. In this case, we provide 3 because the list has three elements and we store the random position in a new variable called position. Finally, instead of accessing a fixed element in the names list, we access the element that position refers to. If you run this code a few times, you should notice that different names are chosen randomly. Now, what happens if we want to add a fourth name, Dave, to our list? We need to update the list itself, but we also need to update the value we provide to randrange to let it know that it can give us larger numbers. Making multiple changes just to add one name can cause problems—if the program is much larger, we may forget which parts of the code need to be updated. Luckily, Python has a nice feature which allows us to make this simpler. Instead of a fixed number (such as 3), we can ask Python for the length of a list, and provide that to the randrange function. Then, whenever we update the list, Python knows exactly how long it is, and can generate suitable random numbers. Here is the code, which is updated to make it easier to change the length of the list: import random names = ["Alice", "Bob", "Carol"] length = len(names) position = random.randrange(length) print names[position] Here, we've created a new variable called length to hold the length of the list. We then use the len function (which is short for length) to compute the length of our list, and we give length to the randrange function. If you run this code, you should see that it works exactly as it did before, and it easily copes if you add or remove elements from the list. It turns out that this is such a common thing to do, that the writers of the random module have provided a function which does the same job. We can use this to simplify our code: import random names = ["Alice", "Bob", "Carol", "Dave"] print random.choice(names) As you can see, we no longer need to compute the length of the list or a random position in it: random.choice does all of this for us, and simply gives us a random element of any list we provide it with. As we will see in the next section, this is useful since we can reuse random.choice for all the different lists we want to include in our program. If you run this program, you will see that it works the same as it did before, despite being much shorter. Creating phrases Now that we can get a random element from a list, we've crossed the halfway mark to generating random sentences! Create two more lists in your program, one called adjectives, and the other called nouns. Put as many descriptive words as you like into the first one, and a selection of objects into the second. 
Here are the three lists I now have in my program: names = ["Alice", "Bob", "Carol", "Dave"] adjectives = ["fast", "slow", "pretty", "smelly"] nouns = ["dog", "car", "face", "foot"] Also, instead of printing our random elements immediately, let's store them in variables so that we can put them all together at the end. Remove the existing line of code with print in it, and add the following three lines after the lists have been created: name = random.choice(names) adjective = random.choice(adjectives) noun = random.choice(nouns) Now, we just need to put everything together to create a sentence. Add this line of code right at the end of the program: print name, "has a", adjective, noun Here, we've used commas to separate all of the things we want to display. The name, adjective, and noun are our variables holding the random elements of each of the lists, and "has a" is some extra text that completes the sentence. print will automatically put a space between each thing it displays (and start a new line at the end). If you ever want to prevent Python from adding a space between two items, separate them with + rather than a comma. That's it! If you run the program, you should see random phrases being displayed each time, such as Alice has a smelly foot or Carol has a fast car. Making mischief So, we have random phrases being displayed, but what if we now want to make them less random? What if you want to show your program to a friend, but make sure that it only ever says nice things about you, or bad things about them? In this section, we'll extend the program to do just that. Dictionaries The first thing we're going to do is replace one of our lists with a dictionary. A dictionary in Python uses one piece of information (a number, some text, or almost anything else) to search for another. This is a lot like the dictionaries you might be used to, where you use a word to search for its meaning. In Python, we say that we use a key to look for a value. We're going to turn our adjectives list into a dictionary. The keys will be the existing descriptive words, and the values will be tags that tell us what sort of descriptive words they are. Each adjective will be "good" or "bad". My adjectives list becomes the following dictionary. Make similar changes to yours. adjectives = {"fast":"good", "slow":"bad", "pretty":"good", "smelly":"bad"} As you can see, the square brackets from the list become curly braces when you create a dictionary. The elements are still separated by commas, but now each element is a key-value pair with the adjective first, then a colon, and then the type of adjective it is. To access a value in a dictionary, we no longer use the number which matches its position. Instead, we use the key with which it is paired. So, as an example, the following code will display "good" because "pretty" is paired with "good" in the adjectives dictionary: print adjectives["pretty"] If you try to run your program now, you'll get an error which mentions random.choice(adjectives). This is because random.choice expects to be given a list, but is now being given a dictionary. To get the code working as it was before, replace that line of code with this: adjective = random.choice(adjectives.keys()) The addition of .keys() means that we only look at the keys in the dictionary—these are the adjectives we were using before, so the code should work as it did previously. Test it out now to make sure. Loops You may remember the forever and repeat code blocks in Scratch. 
In this section, we're going to use Python's versions of these to repeatedly choose random items from our dictionary until we find one which is tagged as "good". A loop is the general programming term for this repetition—if you walk around a loop, you will repeat the same path over and over again, and it is the same with loops in programming languages. Here is some code, which finds an adjective and is tagged as "good". Replace your existing adjective = line of code with these lines: while True:    adjective = random.choice(adjectives.keys())    if adjectives[adjective] == "good":        break The first line creates our loop. It contains the while key word, and a test to see whether the code should be executed inside the loop. In this case, we make the test True, so it always passes, and we always execute the code inside. We end the line with a colon to show that this is the beginning of a block of code. While in Scratch we could drag code blocks inside of the forever or repeat blocks, in Python we need to show which code is inside the block in a different way. First, we put a colon at the end of the line, and then we indent any code which we want to repeat by four spaces. The second line is the code we had before: we choose a random adjective from our dictionary. The third line uses adjectives[adjective] to look into the (adjectives) dictionary for the tag of our chosen adjective. We compare the tag with "good" using the double = sign (a double = is needed to make the comparison different from the single = case, which stores a value in a variable). Finally, if the tag matches "good",we enter another block of code: we put a colon at the end of the line, and the following code is indented by another four spaces. This behaves the same way as the Scratch if block. The fourth line contains a single word: break. This is used to escape from loops, which is what we want to do now that we have found a "good" adjective. If you run your code a few times now, you should see that none of the bad adjectives ever appear. Conditionals In the preceding section, we saw a simple use of the if statement to control when some code was executed. Now, we're going to do something a little more complex. Let's say we want to give Alice a good adjective, but give Bob a bad adjective. For everyone else, we don't mind if their adjective is good or bad. The code we already have to choose an adjective is perfect for Alice: we always want a good adjective. We just need to make sure that it only runs if our random phrase generator has chosen Alice as its random person. To do this, we need to put all the code for choosing an adjective within another if statement, as shown here: if name == "Alice":    while True:        adjective = random.choice(adjectives.keys())        if adjectives[adjective] == "good":            break Remember to indent everything inside the if statement by an extra four spaces. Next, we want a very similar piece of code for Bob, but also want to make sure that the adjective is bad: elif name == "Bob":    while True:        adjective = random.choice(adjectives.keys())        if adjectives[adjective] == "bad":           break The only differences between this and Alice's code is that the name has changed to "Bob", the target tag has changed to "bad", and if has changed to elif. The word elif in the code is short for else if. We use this version because we only want to do this test if the first test (with Alice) fails. 
This makes a lot of sense if we look at the code as a whole: if our random person is Alice, do something, else if our random person is Bob, do something else. Finally, we want some code that can deal with everyone else. This time, we don't want to perform another test, so we don't need an if statement: we can just use else:

else:
    adjective = random.choice(adjectives.keys())

With this, our program does everything we wanted it to do. It generates random phrases, and we can even customize what sort of phrase each person gets. You can add as many extra elif blocks to your program as you like, so as to customize it for different people.

Functions

In this section, we're not going to change the behavior of our program at all; we're just going to tidy it up a bit. You may have noticed that when customizing the types of adjectives for different people, you created multiple sections of code which were almost identical. This isn't a very good way of programming, because if we ever want to change the way we choose adjectives, we will have to do it multiple times, and this makes it much easier to make mistakes or forget to make a change somewhere. What we want is a single piece of code that does the job we want, and which we can then use multiple times. We call this piece of code a function. We saw an example of a function being created in the comparison with Scratch at the beginning of this article, and we've used a few functions from the random module already. A function takes some inputs (called arguments), does some computation with them to produce a result, and returns that result. Here is a function which chooses an adjective for us with a given tag:

def chooseAdjective(tag):
    while True:
        item = random.choice(adjectives.keys())
        if adjectives[item] == tag:
            break
    return item

In the first line, we use def to say that we are defining a new function. We also give the function's name and the names of its arguments in brackets. We separate the arguments by commas if there is more than one of them. At the end of the line, we have a colon to show that we are entering a new code block, and the rest of the code in the function is indented by four spaces.

The next four lines should look very familiar to you—they are almost identical to the code we had before. The only difference is that instead of comparing with "good" or "bad", we compare with the tag argument. When we use this function, we will set tag to an appropriate value.

The final line returns the suitable adjective we've found. Pay attention to its indentation. The line of code is inside the function, but not inside the while loop (we don't want to return every item we check), so it is only indented by four spaces in total.

Type the code for this function anywhere above the existing code which chooses the adjective; the function needs to exist in the code prior to the place where we use it. In particular, in Python, we tend to place our code in the following order:

Imports
Functions
Variables
Rest of the code

This allows us to use our functions when creating the variables. So, place your function just after the import statement, but before the lists. We can now use this function instead of the several lines of code that we were using before. The code I'm going to use to choose the adjective now becomes:

if name == "Alice":
    adjective = chooseAdjective("good")
elif name == "Bob":
    adjective = chooseAdjective("bad")
else:
    adjective = random.choice(adjectives.keys())

This looks much neater!
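If you want to convince yourself that the new function works on its own, you can also call it directly from the Shell after running the program; this quick, optional check uses only the lists and the chooseAdjective function defined above:

>>> chooseAdjective("good")
'pretty'
>>> chooseAdjective("bad")
'smelly'

The exact words you get back will vary, because the choice is still random, but a "good" tag should only ever return adjectives tagged as good.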
Now, if we ever want to change how an adjective is chosen, we just need to change the chooseAdjective function, and the change will be seen in every part of the code where the function is used. Complete code listing Here is the final code you should have when you have completed this article. You can use this code listing to check that you have everything in the right order, or look for other problems in your code. Of course, you are free to change the contents of the lists and dictionaries to whatever you like; this is only an example: import random   def chooseAdjective(tag):    while True:        item = random.choice(adjectives.keys())        if adjectives[item] == tag:            break    return item   names = ["Alice", "Bob", "Carol", "Dave"] adjectives = {"fast":"good", "slow":"bad", "pretty":"good", "smelly":"bad"} nouns = ["dog", "car", "face", "foot"]   name = random.choice(names) #adjective = random.choice(adjectives) noun = random.choice(nouns)   if name == "Alice":    adjective = chooseAdjective("good") elif name == "Bob":    adjective = chooseAdjective("bad") else:    adjective = random.choice(adjectives.keys())   print name, "has a", adjective, noun Summary In this article, we learned about the Python programming language and how it can be used to create random phrases. We saw that it shared lots of features with Scratch, but is simply presented differently. Resources for Article: Further resources on this subject: Develop a Digital Clock [article] GPS-enabled Time-lapse Recorder [article] The Raspberry Pi and Raspbian [article]

Administration of Configuration Manager through PowerShell

Packt
28 Apr 2015
15 min read
When we are required to automate a few activities in Configuration Manager, we need to use any of the scripting languages, such as VB or PowerShell. PowerShell has its own advantages over other scripting languages. In this article by Guruprasad HP, coauthor of the book Microsoft System Center Powershell Essentials we will cover: The introduction of Configuration Manager through PowerShell Hierarchy details Asset and compliance (For more resources related to this topic, see here.) Introducing Configuration Manager through PowerShell The main intention of this article is to give you a brief idea of how to use PowerShell with Configuration Manager and not to make you an expert with all the cmdlets. With the goal of introducing Configuration Manager admins to PowerShell, this article mainly covers how to use PowerShell cmdlets to get the information about Configuration Manager configurations and how to create our own custom configurations using PowerShell. Just like you cannot get complete information of any person during the first meet, you cannot expect everything in this article. This article starts with an assumption that we have a well-built Configuration Manager environment. To start with, let's first understand how to fetch details from Configuration Manager. After that, we will create our own custom configurations. To stick on to convention, we will first learn how to fetch configuration details from Configuration Manager followed by a demonstration of how to create our own custom configurations using PowerShell. PowerShell provides around 560 different cmdlets to administrate and manage Configuration Manager. You can verify the cmdlets counts for Configuration Manager by using the count operation with the Get-Command cmdlet with ConfigurationManager as the module parameter: (Get-Command –Module ConfigurationManager).Count It is always a good idea to export all the cmdlets to an external file that you can use as a reference at any point of time. You can export the cmdlets by using Out-File with the Get-Command cmdlet: Get-Command –Module ConfigurationManager | Out-File "D:SCCMPowerShellCmdlets.txt" Once we have the Configuration Manager infrastructure ready, we can start validating the configurations through the PowerShell console. Here are the quick cmdlets that help to verify the Configuration Manager configurations followed by cmdlets to create custom configurations. Since PowerShell follows a verb-noun sequence, we can easily identify the cmdlets that help to check configurations as they start with Get. Similarly, cmdlets to create new configurations will typically start with New, Start, or set. We can always refer to the Microsoft TechNet page at http://technet.microsoft.com/en-us/library/jj821831(v=sc.20).aspx for the latest list of all the available cmdlets. Before proceeding further, we have to set the execution location from the current drive to System Center Configuration Manager (SCCM) to avail the benefit of using PowerShell for the administration of SCCM. To connect, we can use the Set-Location cmdlet with the site code as the parameter or we can open PowerShell from the Configuration Manager console. Assuming we have P01 as the site code, we can connect to Configuration Manager using PowerShell by executing the following command: Set-Location P01: Hierarchy details This section will concentrate on how to get the Configuration Manager site details and how to craft our own custom hierarchy configurations using PowerShell cmdlets. 
This involves knowing and configuring the site details, user and device discovery, boundary configurations, and installation of various site roles. Site details First and foremost, get to know the Configuration Manager architecture details. You can use the Get-CMSite cmdlet to know the details of the Configuration Manager site. This cmdlet without any parameters will give the details of the site installed locally. To get the details of a remote site, you are required to give the site name or the site code of the remote site: Get-CMSite Get-CMSite –SiteName "India Site" Get-CMSite –SiteCode P01 Discovery details It is important to get the discovery details before proceeding, as they decide the computers and the users that Configuration Manager will manage. PowerShell provides the Get-CMDiscoveryMethod cmdlet to get complete details of the discovery information. You can pass the discovery method as a parameter to the cmdlet to get the complete details of that discovery method. Additionally, you can also specify the site code as a parameter to the cmdlet to constrain the output to that particular site. In the following example, we are trying to get the information of HeartBeatDiscovery and we are restricting our search to the P01 site: Get-CMDiscoveryMethod –Name HeartBeatDiscovery –SiteCode P01 We can also pass other discovery methods as parameters to this cmdlet. Instead of HeartBeatDiscovery, you can use any of the following methods: ActiveDirectoryForestDiscovery ActiveDirectoryGroupDiscovery ActiveDirectorySystemDiscovery ActiveDirectoryUserDiscovery NetworkDiscovery Boundary details One of the first and most important things to configure in Configuration Manager is the boundary settings. Once discovery is enabled, we are required to create a boundary and link it with a boundary group to manage clients through Configuration Manager. PowerShell provides inbuilt cmdlets to get information about the configured boundaries and boundary groups. We also have cmdlets to create and configure new boundaries. You can use Get-CMBoundary to fetch the details of the boundaries configured in Configuration Manager. You can also pipe the output to Format-List with the * (asterisk) wildcard to get the detailed information of each boundary. By default, this cmdlet will return all the available boundaries configured in Configuration Manager. This cmdlet will also accept parameters, such as the boundary name, which will give the information of a specified boundary. You can even specify the boundary group name as the parameter, which will return the boundary specified by the associated boundary group. You can also specify the boundary ID as a parameter for this cmdlet: Get-CMBoundary –Name "Test Boundary" Get-CMBoundary –BoundaryGroup "Test Boundary Group" Get-CMBoundary –ID 12587459 Similarly, we can use Get-CMBoundaryGroup to view the details of all the boundary groups created and configured on the console. Using the cmdlet with no parameters will result in the listing of all the boundary groups available in the console.
You can use the boundary group name or ID as a parameter to get the information of the boundary group you are interested in: Get-CMBoundaryGroup Get-CMBoundaryGroup -Name "Test Boundary Group" Get-CMBoundaryGroup –ID "1259843" You can also get the information of multiple boundary groups by supplying a list as a parameter to the cmdlet: Get-CMBoundaryGroup –Name "TestBG1", "TestBG2", "TestBG3", "TestBG4" So far, we have seen how to read boundary and boundary group details using PowerShell cmdlets. Now, let's see how to create our own custom boundary in Configuration Manager using PowerShell cmdlets. Just as you create boundaries in the console, PowerShell provides the New-CMBoundary cmdlet to create boundaries. At the minimum, we are required to provide the boundary name, boundary type, and value as parameters to the cmdlet. We can create boundaries based on different criteria, such as the Active Directory site, IP subnet, IP range, and IPv6 prefix. PowerShell allows us to specify, in the boundary type parameter, the criteria on which we want to base the boundary. The following examples show all four ways to create boundaries; the boundary type to be used is decided based on the architecture and the requirement: New-CMBoundary –DisplayName "IPRange Boundary" –BoundaryType IPRange –Value "192.168.50.1-192.168.50.99" New-CMBoundary –DisplayName "ADSite Boundary" –BoundaryType ADSite –Value "Default-First-Site-Name" New-CMBoundary –DisplayName "IPSubnet Boundary" –BoundaryType IPSubnet –Value "192.168.50.0/24" New-CMBoundary –DisplayName "IPV6 Boundary" –BoundaryType IPv6Prefix –Value "FE80::/64" With the introduction of the boundary group concept in Configuration Manager 2012, every boundary created is expected to be made part of a boundary group before it starts managing clients. So, we first need to create a boundary group (if not present) and then add the boundary to the boundary group. We can use the New-CMBoundaryGroup cmdlet to create a new Configuration Manager boundary group. At the minimum, we are required to pass the boundary group name as a parameter, but it is also recommended that you pass a boundary description as a parameter: New-CMBoundaryGroup –Name "Test Boundary Group" –Description "Test boundary group created from PowerShell for testing" Upon successful execution, the command will create a boundary group named Test Boundary Group. We will now add our newly created boundary to this newly created boundary group. PowerShell provides an Add-CMBoundaryToGroup cmdlet to add an existing boundary to an existing boundary group: Add-CMBoundaryToGroup –BoundaryName "IPRange Boundary" –BoundaryGroupName "Test Boundary Group" This will add the IPRange Boundary boundary to the Test Boundary Group boundary group. You can use looping to add multiple boundaries to the boundary group in a real-time scenario. We can remove a boundary from Configuration Manager using the Remove-CMBoundary cmdlet. We just specify the name or ID of the boundary to be deleted as a parameter to the cmdlet: Remove-CMBoundary –Name "Test Boundary" -force Distribution point details The details of the distribution points are one of the most common requirements, and it is essential that the Configuration Manager admin knows the distribution points configured in the environment to plan and execute any deployments. We can do this either using the traditional way of logging in to the console or by using the PowerShell approach.
PowerShell provides the Get-CMDistributionPoint cmdlet to get the list of distribution points configured. Distribution points in Configuration Manager are used to store files, such as software packages, update packages, operating system deployment related packages, and so on. If no parameters are specified, this cmdlet will list all the distribution points available. You can pass the site system server name and site code as parameters to filter the result, which will restrict the results to the specified site: Get-CMDistributionPoint –SiteSystemServerName "SCCMP01.Guru.Com" –SiteCode "P01" Here is a quick look at how to create and manage distribution points in Configuration Manager through PowerShell. We can create and manage the distribution point site system role in Configuration Manager through PowerShell just as we did using the console. To do this, we first need to create a site system server on the site (if not available), which can later be upgraded to the site distribution point. We can do this using the New-CMSiteSystemServer cmdlet: New-CMSiteSystemServer –sitecode "P01" –UseSiteServerAccount –ServerName "dp.guru.com" This will use the site server account for the creation of the new site system. Next, we will configure this site system as a distribution point. We can do this by using the Add-CMDistributionPoint cmdlet: Add-CMDistributionPoint –SiteCode "P01" –SiteSystemServerName "dp.guru.com" –MinimumFreeSpaceMB 500 –CertificateExpirationTimeUtc "2020/12/30" This will create dp.guru.com as a distribution point and also reserve 500 MB of space. We can also enable IIS and PXE support for the distribution point, and configure the DP to respond to incoming PXE requests; it just takes a few more parameters on the distribution point creation cmdlet: Add-CMDistributionPoint –SiteCode "P01" –SiteSystemServerName "dp.guru.com" –MinimumFreeSpaceMB 500 –InstallInternetServer –EnablePXESupport –AllowRespondIncomingPXERequest –CertificateExpirationTimeUtc "2020/12/30" We can create a distribution point group (if not already present) for the effective management of the distribution points available in the environment, using the New-CMDistributionPointGroup cmdlet with, at the minimum, the distribution point group name as the parameter: New-CMDistributionPointGroup –Name "Test Distribution Group" With the distribution point group created, we can add the newly created distribution point to the distribution point group. You can use the Add-CMDistributionPointToGroup cmdlet with the distribution point name and distribution point group name, at the minimum, as parameters: Add-CMDistributionPointToGroup –DistributionPointName "dp.guru.com" –DistributionPointGroupName "Test Distribution Group" We can also add any device collection to the newly created distribution point group so that whenever we deploy items (such as packages, programs, and so on) to the device collection, the content will be auto distributed to the distribution group: Add-CMDeviceCollectionToDistributionPointGroup –DeviceCollectionName "TestCollection1" –DistributionPointGroupName "Test Distribution Group" Management point details The management point provides policies and service location information to the client. It also receives data from clients and processes and stores it in the database. PowerShell provides the Get-CMManagementPoint cmdlet to get the details of the management point.
Optionally, you can provide the site system server name and the site code as parameters to the cmdlet. The following example will fetch the management points associated with the SCCMP01.Guru.Com site system that has the site code P01: Get-CMManagementPoint –SiteSystemServerName "SCCMP01.Guru.Com" –SiteCode "P01" When you install CAS or the primary server using the default settings, the distribution points and management points will be automatically installed. However, if you want to add an additional management point, you can add the role from the server or through the PowerShell console. PowerShell provides the Add-CMManagementPoint cmdlet to add a new management point to the site. At the minimum, we are required to provide the site server name that we designated as the management point, the database name, the site code, the SQL server name, and the SQL instance name. The following example depicts how to create a management point through PowerShell: Add-CMManagementPoint –SiteSystemServerName "MP1.Guru.Com" –SiteCode "P01" –SQLServerFqDn "SQL.Guru.Com" -SQLServerInstanceName "SCCMP01" –DataBaseName "SCCM" –ClientConnectionType InternetAndIntranet –AllowDevice –GenerateAlert -EnableSsl We can use the Set-CMManagementPoint cmdlet to change the settings of any management point that has already been created. The following example changes the GenerateAlert property to false: Set-CMManagementPoint –SiteSystemServerName "MP1.Guru.Com" –SiteCode "P01" –GenerateAlert:$False Other site role details Like distribution points and management points, we can get the detailed information of all other site roles (if they are installed and configured in the Configuration Manager environment). The following command snippet lists the different cmdlets available and their usage to get the details of different roles: Get-CMApplicationCatalogWebServicePoint –SiteSystemServerName "SCCMP01.guru.com" –SiteCode P01 Get-CMApplicationCatalogWebsitePoint –SiteSystemServerName "SCCMP01.guru.com" –SiteCode P01 Get-CMEnrollmentPoint –SiteSystemServerName "SCCMP01.guru.com" –SiteCode P01 Get-CMEnrollmentProxyPoint –SiteSystemServerName "SCCMP01.guru.com" –SiteCode P01 Get-CMFallbackStatusPoint –SiteSystemServerName "SCCMP01.guru.com" –SiteCode P01 Get-CMSystemHealthValidatorPoint –SiteSystemServerName "SCCMP01.guru.com" –SiteCode P01 Asset and compliance This section mainly concentrates on gathering information and how to get details of devices, users, compliance settings, alerts, and so on. It also demonstrates how to create custom collections, add special configurations to collections, create custom client settings, install client agents, approve agents, and so on. Collection details Getting the collection details from PowerShell is as easy as using the console to get the details. You can use the Get-CMDeviceCollection cmdlet to get the details of the available collections. We can pipe the output to Format-Table with the AutoSize parameter to get a neat view: Get-CMDeviceCollection | Format-Table –AutoSize We can also use the grid view to get the details popped out as a grid.
This will give us a nice grid that we can scroll and sort easily: Get-CMDeviceCollection | Out-GridView We can use Name or CollectionID as the parameter to get the information of a particular collection: Get-CMDeviceCollection –Name "All Windows-7 Devices" Get-CMDeviceCollection –CollectionId "2225000D" You can also specify the distribution point group name as the parameter to get the list of the collections that are associated with the specified distribution point group: Get-CMDeviceCollection –DistributionPointGroupName "Test Distribution Point Group" You can also replace the DistributionPointGroupName parameter with DistributionPointGroupID to pass the distribution point group ID as a parameter to the cmdlet. Similarly, you can use the Get-CMUserCollection cmdlet to get the details of the available user collections in SCCM: Get-CMUserCollection | Format-Table –AutoSize It is also possible to read the direct members of any existing collection. PowerShell provides cmdlets to read the direct membership of both the device and user collections. We can use Get-CMDeviceCollectionDirectMembershipRule and Get-CMUserCollectionDirectMembershipRule to read the direct members of the device and user collection respectively: Get-CMDeviceCollectionDirectMembershipRule –CollectionName "Test Device Collection" –ResourceID "45647936" Get-CMUserCollectionDirectMembershipRule –CollectionName "Test User Collection" –ResourceID 99845361 Similarly, PowerShell also empowers us to get the query membership rules by using the Get-CMDeviceCollectionQueryMembershipRule and Get-CMUserCollectionQueryMembershipRule cmdlets for the device and user collections respectively. The collection name and rule name need to be specified as parameters to the cmdlet. The following example assumes that there is already a collection named All Windows-7 Machines associated with the Windows-7 Machines rule name and an All Domain Users user collection associated with the Domain Users query rule: Get-CMDeviceCollectionQueryMembershipRule –CollectionName "All Windows-7 Machines" –RuleName "Windows 7 Machines" Get-CMUserCollectionQueryMembershipRule –CollectionName "All Domain Users" –RuleName "Domain Users" Reading Configuration Manager status messages We can get status messages from one or more Configuration Manager site system components. A status message includes information about the success, failure, and warning messages of the site system components. We can use the Get-CMSiteStatusMessage cmdlet to get the status messages. At the minimum, we are required to provide the start time to display the messages: Get-CMSiteStatusMessage –ViewingPeriod "2015/02/20 10:12:05" We can also include a few optional parameters that can help us to filter the output according to our requirement. Most importantly, we can use the computer name, message severity, and site code as additional parameters. For Severity, we can use the All, Error, Information, or Warning values: Get-CMSiteStatusMessage –ViewingPeriod "2014/08/17 10:12:05" –ComputerName XP1 –Severity All –SiteCode P01 So, now we are clear on how to extract collection information from Configuration Manager using PowerShell. Let's now start creating our own collection using PowerShell. Summary In this article, you have learned how to use PowerShell to get the basic details of the Configuration Manager environment. Resources for Article: Further resources on this subject: Managing Microsoft Cloud [article] So, what is PowerShell 3.0 WMI? [article] Unleashing Your Development Skills with PowerShell [article]

Writing Simple Behaviors

Packt
27 Apr 2015
18 min read
In this article by Richard Sneyd, the author of Stencyl Essentials, we will learn about Stencyl's signature visual programming interface to create logic and interaction in our game. We create this logic using a WYSIWYG (What You See Is What You Get) block snapping interface. By the end of this article, you will have the Player Character whizzing down the screen, in pursuit of a zigzagging air balloon! Some of the things we will learn to do in this article are as follows: Create Actor Behaviors, and attach them to Actor Types. Add Events to our Behaviors. Use If blocks to create branching, conditional logic to handle various states within our game. Accept and react to input from the player. Apply physical forces to Actors in real-time. One of the great things about this visual approach to programming is that it largely removes the unpleasantness of dealing with syntax (the rules of the programming language), and the inevitable errors that come with it, when we're creating logic for our game. That frees us to focus on the things that matter most in our games: smooth, well wrought game mechanics and enjoyable, well crafted game-play. (For more resources related to this topic, see here.) The Player Handler The first behavior we are going to create is the Player Handler. This behavior will be attached to the Player Character (PC), which exists in the form of the Cowboy Actor Type. This behavior will be used to handle much of the game logic, and will process the lion's share of player input. Creating a new Actor Behavior It's time to create our very first behavior! Go to the Dashboard, under the LOGIC heading, select Actor Behaviors: Click on This game contains no Logic. Click here to create one. to add your first behavior. You should see the Create New... window appear: Enter the Name Player Handler, as shown in the previous screenshot, then click Create. You will be taken to the Behavior Designer: Let's take a moment to examine the various areas within the Behavior Designer. From left to right, as demonstrated in the previous screenshot, we have: The Events Pane: Here we can add, remove, and move between events in our Behavior. The Canvas: To the center of the screen, the Canvas is where we drag blocks around to click our game logic together. The blocks Palette: This is where we can find any and all of the various logic blocks that Stencyl has on offer. Simply browse to your category of choice, then click and drag the block onto the Canvas to configure it. Follow these steps: Click the Add Event button, which can be found at the very top of the Events Pane. In the menu that ensues, browse to Basics and click When Updating: You will notice that we now have an Event in our Events Pane, called Updated, along with a block called always on our Canvas. In Stencyl events lingo, always is synonymous with When Updating: Since this is the only event in our Behavior at this time, it will be selected by default. The always block (yellow with a red flag) is where we put the game logic that needs to be checked on a constant basis, for every update of the game loop (this will be commensurate with the framerate at runtime, around 60fps, depending on the game performance and system specs). Before we proceed with the creation of our conditional logic, we must first create a few attributes. If you have a programming background, it is easiest to understand attributes as being synonymous to local variables. 
Just like variables, they have a set data type, and you can retrieve or change the value of an attribute in real-time. Creating Attributes Switch to the Attributes context in the blocks palette: There are currently no attributes associated with this behavior. Let's add some, as we'll need them to store important information of various types which we'll be using later on to craft the game mechanics. Click on the Create an Attribute button: In the Create an Attribute… window that appears, enter the Name Target Actor, set Type to Actor, check Hidden?, and press OK: Congratulations! If you look at the lower half of the blocks palette, you will see that you have added your first attribute, Target Actor, of type Actors, and it is now available for use in our code. Next, let's add five Boolean attributes. A Boolean is a special kind of attribute that can be set to either true, or false. Those are the only two values it can accept. First, let's create the Can Be Hurt Boolean: Click Create an Attribute…. Enter the Name Can Be Hurt. Change the Type to Boolean. Check Hidden?. Press OK to commit and add the attribute to the behavior. Repeat steps 1 through 5 for the remaining four Boolean attributes to be added, each time substituting the appropriate name:     Can Switch Anim     Draw Lasso     Lasso Back     Mouse Over If you have done this correctly, you should now see six attributes in your attributes list - one under Actor, and five under Boolean - as shown in the following screenshot: Now let's follow the same process to further create seven attributes; only this time, we'll set the Type for all of them to Number. The Name for each one will be: Health (Set to Hidden?). Impact Force (Set to Hidden?). Lasso Distance (Set to Hidden?). Max Health (Don't set to Hidden?). Turn Force (Don't set to Hidden?). X Point (Set to Hidden?). Y Point (Set to Hidden?). If all goes well, you should see your list of attributes update accordingly: We will add just one additional attribute. Click the Create an Attribute… button again: Name it Mouse State. Change its Type to Text. Do not hide this attribute. Click OK to commit and add the attribute to your behavior. Excellent work, at this point, you have created all of the attributes you will need for the Player Handler behavior! Custom events We need to create a few custom events in order to complete the code for this game prototype. For programmers, custom events are like functions that don't accept parameters. You simply trigger them at will to execute a reusable bunch of code. To accept parameters, you must create a custom block: Click Add Event. Browse to Advanced.. Select Custom Event: You will see that a second event, simply called Custom Event, has been added to our list of events: Now, double-click on the Custom Event in the events stack to change its label to Obj Click Check (For readability purposes, this does not affect the event's name in code, and is completely ignored by Stencyl): Now, let's set the name as it will be used in code. Click between When and happens, and insert the name ObjectClickCheck: From now on, whenever we want to call this custom event in our code, we will refer to it as ObjectClickCheck. Go back to the When Updating event by selecting it from the events stack on the left. 
We are going to add a special block to this event, which calls the custom event we created just a moment ago: In the blocks palette, go to Behaviour | Triggers | For Actor, then click and drag the highlighted block onto the canvas: Drop the selected block inside of the Always block, and fill in the fields as shown (please note that I have deliberately excluded the space between Player and Handler in the behavior name, so as to demonstrate the debugging workflow. This will be corrected in a later part of the article): Now, ObjectClickCheck will be executed for every iteration of the game loop! It is usually a good practice to split up your code like this, rather than having it all in one really long event. That would be confusing, and terribly hard to sift through when behaviors become more complex! Here is a chance to assess what you have learnt from this article thus far. We will create a second custom event; see if you can achieve this goal using only the skeleton guide mentioned next. If you struggle, simply refer back to the detailed steps we followed for the ObjectClickCheck event: Click Add Event | Advanced | Custom Event. A new event will appear at the bottom of the events pane. Double Click on the event in the events pane to change its label to Handle Dir Clicks for readability purposes. Between When and happens, enter the name HandleDirectionClicks. This is the handle we will use to refer to this event in code. Go back to the Updated event, right click on the Trigger event in behavior for self block that is already in the always block, and select copy from the menu. Right-click anywhere on the canvas and select paste from the menu to create an exact duplicate of the block. Change the event being triggered from ObjectClickCheck to HandleDirectionClicks. Keep the value PlayerHandler for the behavior field. Drag and drop the new block so that it sits immediately under the original. Holding Alt on the keyboard, and clicking and dragging on a block, creates a duplicate of that block. Were you successful? If so, you should see these changes and additions in your behavior (note that the order of the events in the events pane does not affect the game logic, or the order in which code is executed). Learning to create and utilize custom events in Stencyl is a huge step towards mastering the tool, so congratulations on having come this far! Testing and debugging As with all fields of programming and software development, it is important to periodically and iteratively test your code. It's much easier to catch and repair mistakes this way. On that note, let's test the code we've written so far, using print blocks. Browse to and select Flow | Debug | print from the blocks palette: Now, drag a copy of this block into both of your custom events, snapping it neatly into the when happens blocks as you do so. For the ObjectClickCheck event, type Object Click Detected into the print block For the HandleDirectionClicks event, type Directional Click Detected into the print block. We are almost ready to test our code. Since this is an Actor Behavior, however, and we have not yet attached it to our Cowboy actor, nothing would happen yet if we ran the code. We must also add an instance of the Cowboy actor to our scene: Click the Attach to Actor Type button to the top right of the blocks palette: Choose the Cowboy Actor from the ensuing list, and click OK to commit. Go back to the Dashboard, and open up the Level 1 scene. 
In the Palette to the right, switch from Tiles to Actors, and select the Cowboy actor: Ensure Layer 0 is selected (as actors cannot be placed on background layers). Click on the canvas to place an instance of the actor in the scene, then click on the Inspector, and change the x and y Scale of the actor to 0.8: Well done! You've just added your first behavior to an Actor Type, and added your first Actor Instance to a scene! We are now ready to test our code. First, Click the Log Viewer button on the toolbar: This will launch the Log Viewer. The Log Viewer will open up, at which point we need only set Platform to Flash (Player), and click the Test Game Button to compile and execute our code: After a few moments, if you have followed all of the steps correctly, you will see that the game windows opens on the screen and a number of events appear on the Log Viewer. However, none of these events have anything to do with the print blocks we added to our custom events. Hence, something has gone wrong, and must be debugged. What could it be? Well, since the blocks simply are not executing, it's likely a typo of some kind. Let's look at the Player Handler again, and you'll see that within the Updated event, we've referred to the behavior name as PlayerHandler in both trigger event blocks, with no space inserted between the words Player and Handler: Update both of these fields to Player Handler, and be sure to include the space this time, so that it looks like the following (To avoid a recurrence of this error, you may wish to use the dropdown menu by clicking the downwards pointing grey arrow, then selecting Behavior Names to choose your behavior from a comprehensive list): Great work! You have successfully completed your first bit of debugging in Stencyl. Click the Test Game button again. After the game window has opened, if you scroll down to the bottom of the Log Viewer, you should see the following events piling up: These INFO events are being triggered by the print blocks we inserted into our custom events, and prove that our code is now working. Excellent job! Let's move on to a new Actor; prepare to meet Dastardly Dan! Adding the Balloon Let's add the balloon actor to our game, and insert it into Level 1: Go to the Dashboard, and select Actor Types from the RESOURCES menu. Press Click here to create a new Actor Type. Name it Balloon, and click Create. Click on This Actor Type contains no animations. Click here to add an animation. Change the text in the Name field to Default. Un-check looping?. Press the Click here to add a frame. button. The Import Frame from Image Strip window appears. Change the Scale to 4x. Click Choose Image... then browse to Game AssetsGraphicsActor Animations and select Balloon.png. Keep Columns and Rows set to 1, and click Add to commit this frame to the animation. All animations are created with a box collision shape by default. In actuality, the Balloon actor requires no collisions at all, so let's remove it. Go to the Collision context, select the Default box, and press Delete on the keyboard: The Balloon Actor Type is now free of collision shapes, and hence will not interact physically with other elements of our game levels. Next, switch to the Physics context: Set the following attributes: Set What Kind of Actor Type? to Normal. Set Can Rotate? To No. This will disable all rotational physical forces and interactions. We can still rotate the actor by setting its rotation directly in the code, however. Set Affected by Gravity? to No. 
We will be handling the downward trajectory of this actor ourselves, without using the gravity implemented by the physics engine. Just before we add this new actor to Level 1, let's add a behavior or two. Switch to the Behaviors context: Then, follow these steps: This Actor Type currently has no attached behaviors. Click Add Behavior, at the bottom left hand corner of the screen: Under FROM YOUR LIBRARY, go to the Motion category, and select Always Simulate. The Always Simulate behavior will make this actor operational, even if it is not on the screen, which is a desirable result in this case. It also prevents Stencyl from deleting the actor when it leaves the scene, which it would automatically do in an effort to conserve memory, if we did not explicitly dictate otherwise. Click Choose to add it to the behaviors list for this Actor Type. You should see it appear in the list: Click Add Behavior again. This time, under FROM YOUR LIBRARY, go the Motion category once more, and this time select Wave Motion (you'll have to scroll down the list to see it). Click Choose to add it to the behavior stack. You should see it sitting under the Always Simulate behavior: Configuring prefab behaviors Prefab behaviors (also called shipped behaviors) enable us to implement some common functionality, without reinventing the wheel, so to speak. The great thing about these prefab behaviors, which can be found in the behavior library, is that they can be used as templates, and modified at will. Let's learn how to add and modify a couple of these prefab behaviors now. Some prefab behaviors have exposed attributes which can be configured to suit the needs of the project. The Wave Motion behavior is one such example. Select it from the stack, and configure the attributes as follows: Set Direction to Horizontal from the dropdown menu. Set Start Speed to 5. Set Amplitude to 64. Set Wavelength to 128. Fantastic! Now let's add an Instance of the Balloon actor to Level 1: Click the Add to Scene button at the top right corner of your view. Select the Level 1 scene. Select the Balloon. Click on the canvas, below the Cowboy actor, to place an instance of the Balloon in the scene: Modifying prefab behaviors Before we test the game one last time, we must quickly add a prefab behavior to the Cowboy Actor Type, modifying it slightly to suit the needs of this game (for instance, we will need to create an offset value for the y axis, so the PC is not always at the centre of the screen): Go to the Dashboard, and double click on the Cowboy from the Actor Types list. Switch to the Behavior Context. Click Add Behavior, as you did previously when adding prefab behaviors to the Balloon Actor Type. This time, under FROM YOUR LIBRARY, go to the Game category, and select Camera Follow. As the name suggests, this is a simple behavior that makes the camera follow the actor it is attached to. Click Choose to commit this behavior to the stack, and you should see this: Click the Edit Behavior button, and it will open up in the Behavior Designer: In the Behavior Designer, towards the bottom right corner of the screen, click on the Attributes tab: Once clicked, you will see a list of all the attributes in this behavior appear in the previous window. Click the Add Attribute button: Perform the following steps: Set the Name to Y Offset. Change the Type to Number. Leave the attribute unhidden. 
Click OK to commit new attribute to the attribute stack: We must modify the set IntendedCameraY block in both the Created and the Updated events: Holding Shift, click and drag the set IntendedCameraY block out onto the canvas by itself: Drag the y-center of Self block out like the following: Click the little downward pointing grey arrow at the right of the empty field in the set intendedCameraY block , and browse to Math | Arithmetic | Addition block: Drag the y-center of Self block into the left hand field of the Add block: Next, click the small downward pointing grey arrow to the right of the right hand field of the Addition block to bring up the same menu as before. This time, browse to Attributes, and select Y Offset: Now, right click on the whole block, and select Copy (this will copy it to the clipboard), then simply drag it back into its original position, just underneath set intendedCameraX: Switch to the Updated Event from the events pane on the left, hold Shift, then click and drag set intendedCameraY out of the Always block and drop it in the trash can, as you won't need it anymore. Right-click and select Paste to place a copy of the new block configuration you copied to the clipboard earlier: Click and drag the pasted block so that it appears just underneath the set intendedCameraX block, and save your changes: Testing the changes Go back to the Cowboy Actor Type, and open the Behavior context; click File | Reload Document (Ctrl-R or Cmd-R) to update all the changes. You should see a new configurable attribute for the Camera Follow Behavior, called Y Offset. Set its value to 70: Excellent! Now go back to the Dashboard and perform the following: Open up Level 1 again. Under Physics, set Gravity (Vertical) to 8.0. Click Test Game, and after a few moments, a new game window should appear. At this stage, what you should see is the Cowboy shooting down the hill with the camera following him, and the Balloon floating around above him. Summary In this article, we learned the basics of creating behaviors, adding and setting Attributes of various types, adding and modifying prefab behaviors, and even some rudimentary testing and debugging. Give yourself a pat on the back; you've learned a lot so far! Resources for Article: Further resources on this subject: Form Handling [article] Managing and Displaying Information [article] Background Animation [article]

Integrating a D3.js visualization into a simple AngularJS application

Packt
27 Apr 2015
19 min read
In this article by Christoph Körner, author of the book Data Visualization with D3 and AngularJS, we will apply the acquired knowledge to integrate a D3.js visualization into a simple AngularJS application. First, we will set up an AngularJS template that serves as a boilerplate for the examples and the application. We will see a typical directory structure for an AngularJS project and initialize a controller. Similar to the previous example, the controller will generate random data that we want to display in an autoupdating chart. Next, we will wrap D3.js in a factory and create a directive for the visualization. You will learn how to isolate the components from each other. We will create a simple AngularJS directive and write a custom compile function to create and update the chart. (For more resources related to this topic, see here.) Setting up an AngularJS application To get started with this article, I assume that you feel comfortable with the main concepts of AngularJS: the application structure, controllers, directives, services, dependency injection, and scopes. I will use these concepts without introducing them in great detail, so if you do not know about one of these topics, first try an intermediate AngularJS tutorial. Organizing the directory To begin with, we will create a simple AngularJS boilerplate for the examples and the visualization application. We will use this boilerplate during the development of the sample application. Let's create a project root directory that contains the following files and folders: bower_components/: This directory contains all third-party components src/: This directory contains all source files src/app.js: This file contains source of the application src/app.css: CSS layout of the application test/: This directory contains all test files (test/config/ contains all test configurations, test/spec/ contains all unit tests, and test/e2e/ contains all integration tests) index.html: This is the starting point of the application Installing AngularJS In this article, we use the AngularJS version 1.3.14, but different patch versions (~1.3.0) should also work fine with the examples. Let's first install AngularJS with the Bower package manager. Therefore, we execute the following command in the root directory of the project: bower install angular#1.3.14 Now, AngularJS is downloaded and installed to the bower_components/ directory. If you don't want to use Bower, you can also simply download the source files from the AngularJS website and put them in a libs/ directory. Note that—if you develop large AngularJS applications—you most likely want to create a separate bower.json file and keep track of all your third-party dependencies. Bootstrapping the index file We can move on to the next step and code the index.html file that serves as a starting point for the application and all examples of this section. We need to include the JavaScript application files and the corresponding CSS layouts, the same for the chart component. Then, we need to initialize AngularJS by placing an ng-app attribute to the html tag; this will create the root scope of the application. 
Here, we will call the AngularJS application myApp, as shown in the following code: <html ng-app="myApp"> <head>    <!-- Include 3rd party libraries -->    <script src="bower_components/d3/d3.js" charset="UTF-   8"></script>    <script src="bower_components/angular/angular.js"     charset="UTF-8"></script>      <!-- Include the application files -->    <script src="src/app.js"></script>    <link href="src/app.css" rel="stylesheet">      <!-- Include the files of the chart component -->    <script src="src/chart.js"></script>    <link href="src/chart.css" rel="stylesheet">   </head> <body>    <!-- AngularJS example go here --> </body> </html> For all the examples in this section, I will use the exact same setup as the preceding code. I will only change the body of the HTML page or the JavaScript or CSS sources of the application. I will indicate to which file the code belongs to with a comment for each code snippet. If you are not using Bower and previously downloaded D3.js and AngularJS in a libs/ directory, refer to this directory when including the JavaScript files. Adding a module and a controller Next, we initialize the AngularJS module in the app.js file and create a main controller for the application. The controller should create random data (that represent some simple logs) in a fixed interval. Let's generate some random number of visitors every second and store all data points on the scope as follows: /* src/app.js */ // Application Module angular.module('myApp', [])   // Main application controller .controller('MainCtrl', ['$scope', '$interval', function ($scope, $interval) {      var time = new Date('2014-01-01 00:00:00 +0100');      // Random data point generator    var randPoint = function() {      var rand = Math.random;      return { time: time.toString(), visitors: rand()*100 };    }      // We store a list of logs    $scope.logs = [ randPoint() ];      $interval(function() {     time.setSeconds(time.getSeconds() + 1);      $scope.logs.push(randPoint());    }, 1000); }]); In the preceding example, we define an array of logs on the scope that we initialize with a random point. Every second, we will push a new random point to the logs. The points contain a number of visitors and a timestamp—starting with the date 2014-01-01 00:00:00 (timezone GMT+01) and counting up a second on each iteration. I want to keep it simple for now; therefore, we will use just a very basic example of random access log entries. Consider to use the cleaner controller as syntax for larger AngularJS applications because it makes the scopes in HTML templates explicit! However, for compatibility reasons, I will use the standard controller and $scope notation. Integrating D3.js into AngularJS We bootstrapped a simple AngularJS application in the previous section. Now, the goal is to integrate a D3.js component seamlessly into an AngularJS application—in an Angular way. This means that we have to design the AngularJS application and the visualization component such that the modules are fully encapsulated and reusable. In order to do so, we will use a separation on different levels: Code of different components goes into different files Code of the visualization library goes into a separate module Inside a module, we divide logics into controllers, services, and directives Using this clear separation allows you to keep files and modules organized and clean. If at anytime we want to replace the D3.js backend with a canvas pixel graphic, we can just implement it without interfering with the main application. 
This means that we want to use a new module of the visualization component and dependency injection. These modules enable us to have full control of the separate visualization component without touching the main application and they will make the component maintainable, reusable, and testable. Organizing the directory First, we add the new files for the visualization component to the project: src/: This is the default directory to store all the file components for the project src/chart.js: This is the JS source of the chart component src/chart.css: This is the CSS layout for the chart component test/test/config/: This directory contains all test configurations test/spec/test/spec/chart.spec.js: This file contains the unit tests of the chart component test/e2e/chart.e2e.js: This file contains the integration tests of the chart component If you develop large AngularJS applications, this is probably not the folder structure that you are aiming for. Especially in bigger applications, you will most likely want to have components in separate folders and directives and services in separate files. Then, we will encapsulate the visualization from the main application and create the new myChart module for it. This will make it possible to inject the visualization component or parts of it—for example just the chart directive—to the main application. Wrapping D3.js In this module, we will wrap D3.js—which is available via the global d3 variable—in a service; actually, we will use a factory to just return the reference to the d3 variable. This enables us to pass D3.js as a dependency inside the newly created module wherever we need it. The advantage of doing so is that the injectable d3 component—or some parts of it—can be mocked for testing easily. Let's assume we are loading data from a remote resource and do not want to wait for the time to load the resource every time we test the component. Then, the fact that we can mock and override functions without having to modify anything within the component will become very handy. Another great advantage will be defining custom localization configurations directly in the factory. This will guarantee that we have the proper localization wherever we use D3.js in the component. Moreover, in every component, we use the injected d3 variable in a private scope of a function and not in the global scope. This is absolutely necessary for clean and encapsulated components; we should never use any variables from global scope within an AngularJS component. Now, let's create a second module that stores all the visualization-specific code dependent on D3.js. Thus, we want to create an injectable factory for D3.js, as shown in the following code: /* src/chart.js */ // Chart Module   angular.module('myChart', [])   // D3 Factory .factory('d3', function() {   /* We could declare locals or other D3.js      specific configurations here. */   return d3; }); In the preceding example, we returned d3 without modifying it from the global scope. We can also define custom D3.js specific configurations here (such as locals and formatters). We can go one step further and load the complete D3.js code inside this factory so that d3 will not be available in the global scope at all. However, we don't use this approach here to keep things as simple and understandable as possible. We need to make this module or parts of it available to the main application. 
In AngularJS, we can do this by injecting the myChart module into the myApp application as follows: /* src/app.js */   angular.module('myApp', ['myChart']); Usually, we will just inject the directives and services of the visualization module that we want to use in the application, not the whole module. However, for the start and to access all parts of the visualization, we will leave it like this. We can use the components of the chart module now on the AngularJS application by injecting them into the controllers, services, and directives. The boilerplate—with a simple chart.js and chart.css file—is now ready. We can start to design the chart directive. A chart directive Next, we want to create a reusable and testable chart directive. The first question that comes into one's mind is where to put which functionality? Should we create a svg element as parent for the directive or a div element? Should we draw a data point as a circle in svg and use ng-repeat to replicate these points in the chart? Or should we better create and modify all data points with D3.js? I will answer all these question in the following sections. A directive for SVG As a general rule, we can say that different concepts should be encapsulated so that they can be replaced anytime by a new technology. Hence, we will use AngularJS with an element directive as a parent element for the visualization. We will bind the data and the options of the chart to the private scope of the directive. In the directive itself, we will create the complete chart including the parent svg container, the axis, and all data points using D3.js. Let's first add a simple directive for the chart component: /* src/chart.js */ …   // Scatter Chart Directive .directive('myScatterChart', ["d3", function(d3){      return {      restrict: 'E',      scope: {        },      compile: function( element, attrs, transclude ) {                   // Create a SVG root element        var svg = d3.select(element[0]).append('svg');          // Return the link function        return function(scope, element, attrs) { };      }    }; }]); In the preceding example, we first inject d3 to the directive by passing it as an argument to the caller function. Then, we return a directive as an element with a private scope. Next, we define a custom compile function that returns the link function of the directive. This is important because we need to create the svg container for the visualization during the compilation of the directive. Then, during the link phase of the directive, we need to draw the visualization. Let's try to define some of these directives and look at the generated output. We define three directives in the index.html file, as shown in the following code: <!-- index.html --> <div ng-controller="MainCtrl">   <!-- We can use the visualization directives here --> <!-- The first chart --> <my-scatter-chart class="chart"></my-scatter-chart>   <!-- A second chart --> <my-scatter-chart class="chart"></my-scatter-chart>   <!-- Another chart --> <my-scatter-chart class="chart"></my-scatter-chart>   </div> If we look at the output of the html page in the developer tools, we can see that for each base element of the directive, we created a svg parent element for the visualization: Output of the HTML page In the resulting DOM tree, we can see that three svg elements are appended to the directives. We can now start to draw the chart in these directives. Let's fill these elements with some awesome charts. 
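Before implementing the chart logic, it is worth a quick aside on why the injectable d3 factory pays off. Because d3 is now an ordinary AngularJS dependency, it can be injected (or mocked) in a unit test. The following is only a hedged sketch of what a Jasmine spec in test/spec/chart.spec.js could look like, assuming the angular-mocks module is loaded by your test runner (for example, Karma); it is not part of the chart code itself:

/* test/spec/chart.spec.js -- illustrative sketch only */
describe('myChart module', function() {

  // Load the module that registers the d3 factory and the chart directive
  beforeEach(module('myChart'));

  it('exposes D3.js through the d3 factory', inject(function(d3) {
    // The factory should hand back a usable D3 namespace
    expect(d3).toBeDefined();
    expect(typeof d3.select).toBe('function');
  }));
});

If you ever need to isolate the directive from D3.js completely, you could override the factory in the test with module(function($provide) { ... }) and register a fake d3 object instead.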
Implementing a custom compile function First, let's add a data attribute to the isolated scope of the directive. This gives us access to the dataset, which we will later pass to the directive in the HTML template. Next, we extend the compile function of the directive to create a g group container for the data points and the axis. We will also add a watcher that checks for changes of the scope data array. Every time the data changes, we call a draw() function that redraws the chart of the directive. Let's get started: /* src/chart.js */ ... // Scatter Chart Directive .directive('myScatterChart', ["d3", function(d3){        // we will soon implement this function    var draw = function(svg, width, height, data){ … };      return {      restrict: 'E',      scope: {        data: '='      },      compile: function( element, attrs, transclude ) {          // Create a SVG root element        var svg = d3.select(element[0]).append('svg');          svg.append('g').attr('class', 'data');        svg.append('g').attr('class', 'x-axis axis');        svg.append('g').attr('class', 'y-axis axis');          // Define the dimensions for the chart        var width = 600, height = 300;          // Return the link function        return function(scope, element, attrs) {            // Watch the data attribute of the scope          scope.$watch('data', function(newVal, oldVal, scope) {              // Update the chart            draw(svg, width, height, scope.data);          }, true);        };      }    }; }]); Now, we implement the draw() function at the beginning of the directive. Drawing charts So far, the chart directive should look like the following code. We will now implement the draw() function, draw the axis, and plot the time series data. We start with setting the height and width for the svg element as follows: /* src/chart.js */ ...   // Scatter Chart Directive .directive('myScatterChart', ["d3", function(d3){      function draw(svg, width, height, data) {      svg        .attr('width', width)        .attr('height', height);      // code continues here }      return {      restrict: 'E',      scope: {        data: '='      },      compile: function( element, attrs, transclude ) { ... } }]); Axis, scale, range, and domain We first need to create the scales for the data and then the axis for the chart. The implementation looks very similar to the scatter chart. We want to update the axis with the minimum and maximum values of the dataset; therefore, we also add this code to the draw() function: /* src/chart.js --> myScatterChart --> draw() */   function draw(svg, width, height, data) { ...
// Define a margin var margin = 30;   // Define x-scale var xScale = d3.time.scale()    .domain([      d3.min(data, function(d) { return d.time; }),      d3.max(data, function(d) { return d.time; })    ])    .range([margin, width-margin]);   // Define x-axis var xAxis = d3.svg.axis()    .scale(xScale)    .orient('top')    .tickFormat(d3.time.format('%S'));   // Define y-scale var yScale = d3.time.scale()    .domain([0, d3.max(data, function(d) { return d.visitors; })])    .range([margin, height-margin]);   // Define y-axis var yAxis = d3.svg.axis()    .scale(yScale)    .orient('left')    .tickFormat(d3.format('f'));   // Draw x-axis svg.select('.x-axis')    .attr("transform", "translate(0, " + margin + ")")    .call(xAxis);   // Draw y-axis svg.select('.y-axis')    .attr("transform", "translate(" + margin + ")")    .call(yAxis); } In the preceding code, we create a timescale for the x-axis and a linear scale for the y-axis and adapt the domain of both axes to match the maximum value of the dataset (we can also use the d3.extent() function to return min and max at the same time). Then, we define the pixel range for our chart area. Next, we create two axes objects with the previously defined scales and specify the tick format of the axis. We want to display the number of seconds that have passed on the x-axis and an integer value of the number of visitors on the y-axis. In the end, we draw the axes by calling the axis generator on the axis selection. Joining the data points Now, we will draw the data points and the axis. We finish the draw() function with this code: /* src/chart.js --> myScatterChart --> draw() */ function draw(svg, width, height, data) { ... // Add new the data points svg.select('.data')    .selectAll('circle').data(data)    .enter()    .append('circle');   // Updated all data points svg.select('.data')    .selectAll('circle').data(data)    .attr('r', 2.5)    .attr('cx', function(d) { return xScale(d.time); })    .attr('cy', function(d) { return yScale(d.visitors); }); } In the preceding code, we first create circle elements for the enter join for the data points where no corresponding circle is found in the Selection. Then, we update the attributes of the center point of all circle elements of the chart. Let's look at the generated output of the application: Output of the chart directive We notice that the axes and the whole chart scales as soon as new data points are added to the chart. In fact, this result looks very similar to the previous example with the main difference that we used a directive to draw this chart. This means that the data of the visualization that belongs to the application is stored and updated in the application itself, whereas the directive is completely decoupled from the data. To achieve a nice output like in the previous figure, we need to add some styles to the cart.css file, as shown in the following code: /* src/chart.css */ .axis path, .axis line {    fill: none;    stroke: #999;    shape-rendering: crispEdges; } .tick {    font: 10px sans-serif; } circle {    fill: steelblue; } We need to disable the filling of the axis and enable crisp edges rendering; this will give the whole visualization a much better look. Summary In this article, you learned how to properly integrate a D3.js component into an AngularJS application—the Angular way. All files, modules, and components should be maintainable, testable, and reusable. 
You learned how to set up an AngularJS application and how to organize the folder structure for the visualization component. We put different responsibilities in different files and modules. Every piece that we can separate from the main application can be reused in another application; the goal is to use as much modularization as possible. As a next step, we created the visualization directive by implementing a custom compile function. This gives us access to the first compilation of the element, where we can append the svg element, which serves as the parent for the visualization, as well as the other container elements.

Resources for Article:
Further resources on this subject:
AngularJS Performance [article]
An introduction to testing AngularJS directives [article]
Our App and Tool Stack [article]
Custom Coding with Apex

Packt
27 Apr 2015
18 min read
In this article by Chamil Madusanka, author of the book Learning Force.com Application Development, you will learn about the custom coding in Apex and also about triggers. We have used many declarative methods such as creating the object's structure, relationships, workflow rules, and approval process to develop the Force.com application. The declarative development method doesn't require any coding skill and specific Integrated Development Environment (IDE). This article will show you how to extend the declarative capabilities using custom coding of the Force.com platform. Apex controllers and Apex triggers will be explained with examples of the sample application. The Force.com platform query language and data manipulation language will be described with syntaxes and examples. At the end of the article, there will be a section to describe bulk data handling methods in Apex. This article covers the following topics: Introducing Apex Working with Apex (For more resources related to this topic, see here.) Introducing Apex Apex is the world's first on-demand programming language that allows developers to implement and execute business flows, business logic, and transactions on the Force.com platform. There are two types of Force.com application development methods: declarative developments and programmatic developments. Apex is categorized under the programmatic development method. Since Apex is a strongly-typed, object-based language, it is connected with data in the Force.com platform and data manipulation using the query language and the search language. The Apex language has the following features: Apex provides a lot of built-in support for the Force.com platform features such as: Data Manipulation Language (DML) with the built-in exception handling (DmlException) to manipulate the data during the execution of the business logic. Salesforce Object Query Language (SOQL) and Salesforce Object Search Language (SOSL) to query and retrieve the list of sObjects records. Bulk data processing on multiple records at a time. Apex allows handling errors and warning using an in-built error-handling mechanism. Apex has its own record-locking mechanism to prevent conflicts of record updates. Apex allows building custom public Force.com APIs from stored Apex methods. Apex runs in a multitenant environment. The Force.com platform has multitenant architecture. Therefore, the Apex runtime engine obeys the multitenant environment. It prevents monopolizing of shared resources using the guard with limits. If any particular Apex code violates the limits, error messages will be displayed. Apex is hosted in the Force.com platform. Therefore, the Force.com platform interprets, executes, and controls Apex. Automatically upgradable and versioned: Apex codes are stored as metadata in the platform. Therefore, they are automatically upgraded with the platform. You don't need to rewrite your code when the platform gets updated. Each code is saved with the current upgrade version. You can manually change the version. It is easy to maintain the Apex code with the versioned mechanism. Apex can be used easily. Apex is similar to Java syntax and variables. The syntaxes and semantics of Apex are easy to understand and write codes. Apex is a data-focused programming language. Apex is designed for multithreaded query and DML statements in a single execution context on the Force.com servers. Many developers can use database stored procedures to run multiple transaction statements on the database server. 
Apex is different from other databases when it comes to stored procedures; it doesn't attempt to provide general support for rendering elements in the user interface. The execution context is one of the key concepts in Apex programming. It influences every aspect of software development on the Force.com platform. Apex is a strongly-typed language that directly refers to schema objects and object fields. If there is any error, it fails the compilation. All the objects, fields, classes, and pages are stored in metadata after successful compilation. Easy to perform unit testing. Apex provides a built-in feature for unit testing and test execution with the code coverage. Apex allows developers to write the logic in two ways: As an Apex class: The developer can write classes in the Force.com platform using Apex code. An Apex class includes action methods which related to the logic implementation. An Apex class can be called from a trigger. A class can be associated with a Visualforce page (Visualforce Controllers/Extensions) or can act as a supporting class (WebService, Email-to-Apex service/Helper classes, Batch Apex, and Schedules). Therefore, Apex classes are explicitly called from different places on the Force.com platform. As a database trigger: A trigger is executed related to a particular database interaction of a Force.com object. For example, you can create a trigger on the Leave Type object that fires whenever the Leave Type record is inserted. Therefore, triggers are implicitly called from a database action. Apex is included in the Unlimited Edition, Developer Edition, Enterprise Edition, Database.com, and Performance Edition. The developer can write Apex classes or Apex triggers in a developer organization or a sandbox of a production organization. After you finish the development of the Apex code, you can deploy the particular Apex code to the production organization. Before you deploy the Apex code, you have to write test methods to cover the implemented Apex code. Apex code in the runtime environment You already know that Apex code is stored and executed on the Force.com platform. Apex code also has a compile time and a runtime. When you attempt to save an Apex code, it checks for errors, and if there are no errors, it saves with the compilation. The code is compiled into a set of instructions that are about to execute at runtime. Apex always adheres to built-in governor limits of the Force.com platform. These governor limits protect the multitenant environment from runaway processes. Apex code and unit testing Unit testing is important because it checks the code and executes the particular method or trigger for failures and exceptions during test execution. It provides a structured development environment. We gain two good requirements for this unit testing, namely, best practice for development and best practice for maintaining the Apex code. The Force.com platform forces you to cover the Apex code you implemented. Therefore, the Force.com platform ensures that you follow the best practices on the platform. Apex governors and limits Apex codes are executed on the Force.com multitenant infrastructure and the shared resources are used across all customers, partners, and developers. When we are writing custom code using Apex, it is important that the Apex code uses the shared resources efficiently. Apex governors are responsible for enforcing runtime limits set by Salesforce. It discontinues the misbehaviors of the particular Apex code. 
If the code exceeds a limit, a runtime exception is thrown that cannot be handled. This error will be seen by the end user. Limit warnings can be sent via e-mail, but they also appear in the logs. Governor limits are specific to a namespace, so AppExchange certified managed applications have their own set of limits, independent of the other applications running in the same organization. Therefore, the governor limits have their own scope. The limit scope will start from the beginning of the code execution. It will be run through the subsequent blocks of code until the particular code terminates. Apex code and security The Force.com platform has a component-based security, record-based security and rich security framework, including profiles, record ownership, and sharing. Normally, Apex codes are executed as a system mode (not as a user mode), which means the Apex code has access to all data and components. However, you can make the Apex class run in user mode by defining the Apex class with the sharing keyword. The with sharing/without sharing keywords are employed to designate that the sharing rules for the running user are considered for the particular Apex class. Use the with sharing keyword when declaring a class to enforce the sharing rules that apply to the current user. Use the without sharing keyword when declaring a class to ensure that the sharing rules for the current user are not enforced. For example, you may want to explicitly turn off sharing rule enforcement when a class acquires sharing rules after it is called from another class that is declared using with sharing. The profile also can maintain the permission for developing Apex code and accessing Apex classes. The author's Apex permission is required to develop Apex codes and we can limit the access of Apex classes through the profile by adding or removing the granted Apex classes. Although triggers are built using Apex code, the execution of triggers cannot be controlled by the user. They depend on the particular operation, and if the user has permission for the particular operation, then the trigger will be fired. Apex code and web services Like other programming languages, Apex supports communication with the outside world through web services. Apex methods can be exposed as a web service. Therefore, an external system can invoke the Apex web service to execute the particular logic. When you write a web service method, you must use the webservice keyword at the beginning of the method declaration. The variables can also be exposed with the webservice keyword. After you create the webservice method, you can generate the Web Service Definition Language (WSDL), which can be consumed by an external application. Apex supports both Simple Object Access Protocol (SOAP) and Representational State Transfer (REST) web services. Apex and metadata Because Apex is a proprietary language, it is strongly typed to Salesforce metadata. The same sObject and fields that are created through the declarative setup menu can be referred to through Apex. Like other Force.com features, the system will provide an error if you try to delete an object or field that is used within Apex. Apex is not technically autoupgraded with each new Salesforce release, as it is saved with a specific version of the API. Therefore, Apex, like other Force.com features, will automatically work with future versions of Salesforce applications. Force.com application development tools use the metadata. 
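To tie the preceding sections together, here is a minimal sketch, not taken from the book's sample application, of how the with sharing and webservice keywords can be combined in one class. The class name and the field names Employee__c and Status__c on the Leave__c object are assumptions made only for this illustration:

// Minimal sketch: a global class that enforces the calling user's sharing
// rules and exposes one method as a SOAP web service.
global with sharing class LeaveRequestService {
    // The webservice keyword makes this static method callable by external
    // systems that consume the generated WSDL.
    webservice static Integer countOpenRequests(Id employeeId) {
        // Because the class is declared with sharing, this query only returns
        // records that the calling user is allowed to see.
        // Employee__c and Status__c are assumed custom fields on Leave__c.
        return [SELECT COUNT() FROM Leave__c
                WHERE Employee__c = :employeeId
                AND Status__c = 'Open'];
    }
}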
Working with Apex

Before you start coding with Apex, you need to learn a few basic things.

Apex basics

Apex comes with its own syntactical framework. Similar to Java, Apex is strongly typed and is an object-based language. If you have some experience with Java, it will be easy to understand Apex. The following lists explain the similarities and differences between Apex and Java:

Similarities:
- Both languages have classes, inheritance, polymorphism, and other common object-oriented programming features
- Both languages have extremely similar syntax and notations
- Both languages are compiled, strongly typed, and transactional

Differences:
- Apex runs in a multitenant environment and is very controlled in its invocations and governor limits
- Apex is case insensitive
- Apex is on-demand and is compiled and executed in the cloud
- Apex is not a general-purpose programming language, but is instead a proprietary language used for specific business logic functions
- Apex requires unit testing for deployment into a production environment

This section will not discuss everything that is included in the Apex documentation from Salesforce, but it will cover topics that are essential for understanding concepts discussed in this article. With this basic knowledge of Apex, you can create Apex code in the Force.com platform.

Apex data types

In Apex classes and triggers, we use variables that contain data values. Variables must be bound to a data type, and a particular variable can only hold values of that data type. All variables and expressions have one of the following data types:
- Primitives
- Enums
- sObjects
- Collections
- An object created from a user-defined or system-defined class
- Null (for the null constant)

Primitive data types

Apex uses the same primitive data types as the web services API, most of which are similar to their Java counterparts. It may seem that Apex primitive variables are passed by value, but they actually use immutable references, similar to Java string behavior. The following are the primitive data types of Apex (a short declaration sketch follows this list):
- Boolean: A value that can only be assigned true, false, or null.
- Date, Datetime, and Time: A Date value indicates a particular day and does not contain any information about time. A Datetime value indicates a particular day and time. A Time value indicates a particular time. Date, Datetime, and Time values must always be created with a system static method.
- ID: Any valid 18-character or 15-character Force.com record identifier.
- Integer, Long, Double, and Decimal: Integer is a 32-bit number that does not include a decimal point. Integers have a minimum value of -2,147,483,648 and a maximum value of 2,147,483,647. Long is a 64-bit number that does not include a decimal point. Use this data type when you need a range of values wider than those provided by Integer. Double is a 64-bit number that includes a decimal point. Both Long and Double have a minimum value of -2^63 and a maximum value of 2^63-1. Decimal is a number that includes a decimal point. Decimal is an arbitrary precision number.
- String: A String is any set of characters surrounded by single quotes. Strings have no limit on the number of characters they can contain, but the heap size limit is used to ensure that a particular Apex program does not grow too large.
- Blob: A Blob is a collection of binary data stored as a single object. Blobs can be accepted as web service arguments, stored in a document, or sent as attachments.
- Object: This can be used as the base type for any other data type. Objects are supported for casting.
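As a quick, hedged illustration of the primitive types listed above, the following declarations should compile in an anonymous Apex block; the literal values are placeholders chosen only for this example:

// Sketch of primitive type declarations (example values only).
Boolean isActive = true;
Date startDate = Date.today();
Datetime createdAt = Datetime.now();
Time lunchTime = Time.newInstance(12, 30, 0, 0);
Id recordId = '001D000000JWBDx';          // a 15-character record ID (example value)
Integer maxInt = 2147483647;
Long bigNumber = 9223372036854775807L;    // 2^63 - 1
Double ratio = 3.14159;
Decimal price = 19.99;
String title = 'Learning Force.com';
Blob payload = Blob.valueOf(title);
Object anything = price;                  // Object can hold a value of any other type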
Enum data types

Enum (or enumerated list) is an abstract data type that stores one value of a finite set of specified identifiers. To define an Enum, use the enum keyword in the variable declaration and then define the list of values. You can define and use an enum in the following way:

public enum Status {NEW, APPROVED, REJECTED, CANCELLED}

The preceding enum has four values: NEW, APPROVED, REJECTED, CANCELLED. By creating this enum, you have created a new data type called Status that can be used as any other data type for variables, return types, and method arguments.

Status leaveStatus = Status.NEW;

Apex provides Enums for built-in concepts such as API error (System.StatusCode). System-defined enums cannot be used in web service methods.

sObject data types

sObjects (short for Salesforce Object) are standard or custom objects that store record data in the Force.com database. There is also an sObject data type in Apex that is the programmatic representation of these sObjects and their data in code. Developers refer to sObjects and their fields by their API names, which can be found in the schema browser. sObject and field references within Apex are validated against actual object and field names when code is written. Force.com tracks the objects and fields used within Apex to prevent users from making the following changes:
- Changing a field or object name
- Converting from one data type to another
- Deleting a field or object
- Organization-wide changes such as record sharing

It is possible to declare variables of the generic sObject data type. The new operator still requires a concrete sObject type, so the instances are all specific sObjects. The following is a code example:

sObject s = new Employee__c();

Casting will be applied as expected, as each row knows its runtime type and can be cast back to that type. The following casting works fine:

Employee__c e = (Employee__c)s;

However, the following casting will generate a runtime exception for a data type collision:

Leave__c leave = (Leave__c)s;

The sObject super class only has the ID variable, so we can only access the ID via the sObject class. This method can also be used with collections and DML operations, although only concrete types can be instantiated. Collections will be described in the upcoming section, and DML operations will be discussed in the Data manipulation section on the Force.com platform. Let's have a look at the following code:

sObject[] sList = new Employee__c[0];
List<Employee__c> eList = (List<Employee__c>)sList;
Database.insert(sList);

Collection data types

Collection data types store groups of elements of other primitive, composite, or collection data types. There are three different types of collections in Apex:

List: A list is an ordered collection of primitives or composite data types distinguished by its index. Each element in a list contains two pieces of information: an index (an integer) and a value (the data). The index of the first element is zero. You can define an Apex list in the following way:

List<DataType> listName = new List<DataType>();
List<String> sList = new List<String>();

There are built-in methods that can be used with lists, including adding/removing elements from the end of the list, getting/setting values at a particular index, and sizing the list by obtaining the number of elements. A full set of list methods is listed at http://www.salesforce.com/us/developer/docs/dbcom_apex250/Content/apex_methods_system_list.htm.
The Apex list is defined in the following way: List<String> sList = new List< String >(); sList.add('string1'); sList.add('string2'); sList.add('string3'); sList.add('string4'); Integer sListSize = sList.size(); // this will return the   value as 4 sList.get(3); //This method will return the value as   "string4" Apex allows developers familiar with the standard array syntax to use that interchangeably with the list syntax. The main difference is the use of square brackets, which is shown in the following code: String[] sList = new String[4]; sList [0] = 'string1'; sList [1] = 'string2'; sList [2] = 'string3'; sList [3] = 'string4'; Integer sListSize = sList.size(); // this will return the   value as 4 Lists, as well as maps, can be nested up to five levels deep. Therefore, you can create a list of lists in the following way: List<List<String>> nestedList = new List<List<String>> (); Set: A set is an unordered collection of data of one primitive data type or sObjects that must have unique values. The Set methods are listed at http://www.salesforce.com/us/developer/docs/dbcom_apex230/Content/apex_methods_system_set.htm. Similar to the declaration of List, you can define a Set in the following way: Set<DataType> setName = new Set<DataType>(); Set<String> setName = new Set<String>(); There are built-in methods for sets, including add/remove elements to/from the set, check whether the set contains certain elements, and the size of the set. Map: A map is an unordered collection of unique keys of one primitive data type and their corresponding values. The Map methods are listed in the following link at http://www.salesforce.com/us/developer/docs/dbcom_apex250/Content/apex_methods_system_map.htm. You can define a Map in the following way: Map<PrimitiveKeyDataType, DataType> = mapName = new   Map<PrimitiveKeyDataType, DataType>(); Map<Integer, String> mapName = new Map<Integer, String>(); Map<Integer, List<String>> sMap = new Map<Integer,   List<String>>(); Maps are often used to map IDs to sObjects. There are built-in methods that you can use with maps, including adding/removing elements on the map, getting values for a particular key, and checking whether the map contains certain keys. You can use these methods as follows: Map<Integer, String> sMap = new Map<Integer, String>(); sMap.put(1, 'string1'); // put key and values pair sMap.put(2, 'string2'); sMap.put(3, 'string3'); sMap.put(4, 'string4'); sMap.get(2); // retrieve the value of key 2 Apex logics and loops Like all programming languages, Apex language has the syntax to implement conditional logics (IF-THEN-ELSE) and loops (for, Do-while, while). The following table will explain the conditional logic and loops in Apex: IF Conditional IF statements in Apex are similar to Java. The IF-THEN statement is the most basic of all the control flow statements. It tells your program to execute a certain section of code only if a particular test evaluates to true. The IF-THEN-ELSE statement provides a secondary path of execution when an IF clause evaluates to false. 
if (Boolean_expression){ statement; statement; statement; statement;} else { statement; statement;} For There are three variations of the FOR loop in Apex, which are as follows: FOR(initialization;Boolean_exit_condition;increment) {     statement; }   FOR(variable : list_or_set) {     statement; }   FOR(variable : [inline_soql_query]) {     statement; } All loops allow for the following commands: break: This is used to exit the loop continue: This is used to skip to the next iteration of the loop While The while loop is similar, but the condition is checked before the first loop, as shown in the following code: while (Boolean_condition) { code_block; }; Do-While The do-while loop repeatedly executes as long as a particular Boolean condition remains true. The condition is not checked until after the first pass is executed, as shown in the following code: do { //code_block; } while (Boolean_condition); Summary In this article, you have learned to develop custom coding in the Force.com platform, including the Apex classes and triggers. And you learned two query languages in the Force.com platform. Resources for Article: Further resources on this subject: Force.com: Data Management [article] Configuration in Salesforce CRM [article] Learning to Fly with Force.com [article]
Using Basic Projectiles

Packt
27 Apr 2015
22 min read
"Flying is learning how to throw yourself at the ground and miss."                                                                                              – Douglas Adams In this article by Michael Haungs, author of the book Creative Greenfoot, we will create a simple game using basic movements in Greenfoot. Actors in creative Greenfoot applications, such as games and animations, often have movement that can best be described as being launched. For example, a soccer ball, bullet, laser, light ray, baseball, and firework are examples of this type of object. One common method of implementing this type of movement is to create a set of classes that model real-world physical properties (mass, velocity, acceleration, friction, and so on) and have game or simulation actors inherit from these classes. Some refer to this as creating a physics engine for your game or simulation. However, this course of action is complex and often overkill. There are often simple heuristics we can use to approximate realistic motion. This is the approach we will take here. In this article, you will learn about the basics of projectiles, how to make an object bounce, and a little about particle effects. We will apply what you learn to a small platform game that we will build up over the course of this article. Creating realistic flying objects is not simple, but we will cover this topic in a methodical, step-by-step approach, and when we are done, you will be able to populate your creative scenarios with a wide variety of flying, jumping, and launched objects. It's not as simple as Douglas Adams makes it sound in his quote, but nothing worth learning ever is. (For more resources related to this topic, see here.) Cupcake Counter It is beneficial to the learning process to discuss topics in the context of complete scenarios. Doing this forces us to handle issues that might be elided in smaller, one-off examples. In this article, we will build a simple platform game called Cupcake Counter (shown in Figure 1). We will first look at a majority of the code for the World and Actor classes in this game without showing the code implementing the topic of this article, that is, the different forms of projectile-based movement. Figure 1: This is a screenshot of Cupcake Counter How to play The goal of Cupcake Counter is to collect as many cupcakes as you can before being hit by either a ball or a fountain. The left and right arrow keys move your character left and right and the up arrow key makes your character jump. You can also use the space bar key to jump. After touching a cupcake, it will disappear and reappear randomly on another platform. Balls will be fired from the turret at the top of the screen and fountains will appear periodically. The game will increase in difficulty as your cupcake count goes up. The game requires good jumping and avoiding skills. Implementing Cupcake Counter Create a scenario called Cupcake Counter and add each class to it as they are discussed. The CupcakeWorld class This subclass of World sets up all the actors associated with the scenario, including a score. It is also responsible for generating periodic enemies, generating rewards, and increasing the difficulty of the game over time. 
The following is the code for this class: import greenfoot.*; import java.util.List;   public class CupcakeWorld extends World { private Counter score; private Turret turret; public int BCOUNT = 200; private int ballCounter = BCOUNT; public int FCOUNT = 400; private int fountainCounter = FCOUNT; private int level = 0; public CupcakeWorld() {    super(600, 400, 1, false);    setPaintOrder(Counter.class, Turret.class, Fountain.class,    Jumper.class, Enemy.class, Reward.class, Platform.class);    prepare(); } public void act() {    checkLevel(); } private void checkLevel() {    if( level > 1 ) generateBalls();    if( level > 4 ) generateFountains();    if( level % 3 == 0 ) {      FCOUNT--;      BCOUNT--;      level++;    } } private void generateFountains() {    fountainCounter--;    if( fountainCounter < 0 ) {      List<Brick> bricks = getObjects(Brick.class);      int idx = Greenfoot.getRandomNumber(bricks.size());      Fountain f = new Fountain();      int top = f.getImage().getHeight()/2 + bricks.get(idx).getImage().getHeight()/2;      addObject(f, bricks.get(idx).getX(),      bricks.get(idx).getY()-top);      fountainCounter = FCOUNT;    } } private void generateBalls() {    ballCounter--;    if( ballCounter < 0 ) {      Ball b = new Ball();      turret.setRotation(15 * -b.getXVelocity());      addObject(b, getWidth()/2, 0);      ballCounter = BCOUNT;    } } public void addCupcakeCount(int num) {    score.setValue(score.getValue() + num);    generateNewCupcake(); } private void generateNewCupcake() {    List<Brick> bricks = getObjects(Brick.class);    int idx = Greenfoot.getRandomNumber(bricks.size());    Cupcake cake = new Cupcake();    int top = cake.getImage().getHeight()/2 +    bricks.get(idx).getImage().getHeight()/2;    addObject(cake, bricks.get(idx).getX(),    bricks.get(idx).getY()-top); } public void addObjectNudge(Actor a, int x, int y) {    int nudge = Greenfoot.getRandomNumber(8) - 4;    super.addObject(a, x + nudge, y + nudge); } private void prepare(){    // Add Bob    Bob bob = new Bob();    addObject(bob, 43, 340);    // Add floor    BrickWall brickwall = new BrickWall();    addObject(brickwall, 184, 400);    BrickWall brickwall2 = new BrickWall();    addObject(brickwall2, 567, 400);    // Add Score    score = new Counter();    addObject(score, 62, 27);    // Add turret    turret = new Turret();    addObject(turret, getWidth()/2, 0);    // Add cupcake    Cupcake cupcake = new Cupcake();    addObject(cupcake, 450, 30);    // Add platforms    for(int i=0; i<5; i++) {      for(int j=0; j<6; j++) {        int stagger = (i % 2 == 0 ) ? 24 : -24;        Brick brick = new Brick();        addObjectNudge(brick, stagger + (j+1)*85, (i+1)*62);      }    } } } Let's discuss the methods in this class in order. First, we have the class constructor CupcakeWorld(). After calling the constructor of the superclass, it calls setPaintOrder() to set the actors that will appear in front of other actors when displayed on the screen. The main reason why we use it here, is so that no actor will cover up the Counter class, which is used to display the score. Next, the constructor method calls prepare() to add and place the initial actors into the scenario. Inside the act() method, we will only call the function checkLevel(). As the player scores points in the game, the level variable of the game will also increase. The checkLevel() function will change the game a bit according to its level variable. 
When our game first starts, no enemies are generated and the player can easily get the cupcake (the reward). This gives the player a chance to get accustomed to jumping on platforms. As the cupcake count goes up, balls and fountains will be added. As the level continues to rise, checkLevel() reduces the delay between creating balls (BCOUNT) and fountains (FCOUNT). The level variable of the game is increased in the addCupcakeCount() method. The generateFountains() method adds a Fountain actor to the scenario. The rate at which we create fountains is controlled by the delay variable fountainContainer. After the delay, we create a fountain on a randomly chosen Brick (the platforms in our game). The getObjects() method returns all of the actors of a given class presently in the scenario. We then use getRandomNumber() to randomly choose a number between one and the number of Brick actors. Next, we use addObject() to place the new Fountain object on the randomly chosen Brick object. Generating balls using the generateBalls() method is a little easier than generating fountains. All balls are created in the same location as the turret at the top of the screen and sent from there with a randomly chosen trajectory. The rate at which we generate new Ball actors is defined by the delay variable ballCounter. Once we create a Ball actor, we rotate the turret based on its x velocity. By doing this, we create the illusion that the turret is aiming and then firing Ball Actor. Last, we place the newly created Ball actor into the scenario using the addObject() method. The addCupcakeCount() method is called by the actor representing the player (Bob) every time the player collides with Cupcake. In this method, we increase score and then call generateNewCupcake() to add a new Cupcake actor to the scenario. The generateNewCupcake() method is very similar to generateFountains(), except for the lack of a delay variable, and it randomly places Cupcake on one of the bricks instead of a Fountain actor. In all of our previous scenarios, we used a prepare() method to add actors to the scenario. The major difference between this prepare() method and the previous ones, is that we use the addObjectNudge() method instead of addObject() to place our platforms. The addObjectNudge() method simply adds a little randomness to the placement of the platforms, so that every new game is a little different. The random variation in the platforms will cause the Ball actors to have different bounce patterns and require the player to jump and move a bit more carefully. In the call to addObjectNudge(), you will notice that we used the numbers 85 and 62. These are simply numbers that spread the platforms out appropriately, and they were discovered through trial and error. I created a blue gradient background to use for the image of CupcakeWorld. Enemies In Cupcake Counter, all of the actors that can end the game if collided with are subclasses of the Enemy class. Using inheritance is a great way to share code and reduce redundancy for a group of similar actors. However, we often will create class hierarchies in Greenfoot solely for polymorphism. Polymorphism refers to the ability of a class in an object-orientated language to take on many forms. We are going to use it, so that our player actor only has to check for collision with an Enemy class and not every specific type of Enemy, such as Ball or RedBall. 
Also, by coding this way, we are making it very easy to add code for additional enemies, and if we find that our enemies have redundant code, we can easily move that code into our Enemy class. In other words, we are making our code extensible and maintainable. Here is the code for our Enemy class: import greenfoot.*;   public abstract class Enemy extends Actor { } The Ball class extends the Enemy class. Since Enemy is solely used for polymorphism, the Ball class contains all of the code necessary to implement bouncing and an initial trajectory. Here is the code for this class: import greenfoot.*;   public class Ball extends Enemy { protected int actorHeight; private int speedX = 0; public Ball() {    actorHeight = getImage().getHeight();    speedX = Greenfoot.getRandomNumber(8) - 4;    if( speedX == 0 ) {      speedX = Greenfoot.getRandomNumber(100) < 50 ? -1 : 1;    } } public void act() {    checkOffScreen(); } public int getXVelocity() {    return speedX; } private void checkOffScreen() {    if( getX() < -20 || getX() > getWorld().getWidth() + 20 ) {      getWorld().removeObject(this);    } else if( getY() > getWorld().getHeight() + 20 ) {      getWorld().removeObject(this);    } } } The implementation of Ball is missing the code to handle moving and bouncing. As we stated earlier, we will go over all the projectile-based code after providing the code we are using as the starting point for this game. In the Ball constructor, we randomly choose a speed in the x direction and save it in the speedX instance variable. We have included one accessory method to return the value of speedX (getXVelocity()). Last, we include checkOffScreen() to remove Ball once it goes off screen. If we do not do this, we would have a form of memory leak in our application because Greenfoot will continue to allocate resources and manage any actor until it is removed from the scenario. For the Ball class, I choose to use the ball.png image, which comes with the standard installation of Greenfoot. In this article, we will learn how to create a simple particle effect. Creating an effect is more about the use of a particle as opposed to its implementation. In the following code, we create a generic particle class, Particles, that we will extend to create a RedBall particle. We have organized the code in this way to easily accommodate adding particles in the future. Here is the code: import greenfoot.*;   public class Particles extends Enemy { private int turnRate = 2; private int speed = 5; private int lifeSpan = 50; public Particles(int tr, int s, int l) {    turnRate = tr;    speed = s;    lifeSpan = l;    setRotation(-90); } public void act() {    move();    remove(); } private void move() {    move(speed);    turn(turnRate); } private void remove() {    lifeSpan--;    if( lifeSpan < 0 ) {      getWorld().removeObject(this);    } } } Our particles are implemented to move up and slightly turn each call of the act() method. A particle will move lifeSpan times and then remove itself. As you might have guessed, lifeSpan is another use of a delay variable. The turnRate property can be either positive (to turn slightly right) or negative (to turn slightly left). We only have one subclass of Particles, RedBall. This class supplies the correct image for RedBall, supplies the required input for the Particles constructor, and then scales the image according to the parameters scaleX and scaleY. 
Here's the implementation: import greenfoot.*;   public class RedBall extends Particles { public RedBall(int tr, int s, int l, int scaleX, int scaleY) {    super(tr, s, l);    getImage().scale(scaleX, scaleY); } } For RedBall, I used the Greenfoot-supplied image red-draught.png. Fountains In this game, fountains add a unique challenge. After reaching level five (see the World class CupcakeWorld), Fountain objects will be generated and randomly placed in the game. Figure 2 shows a fountain in action. A Fountain object continually spurts RedBall objects into the air like water from a fountain. Figure 2: This is a close-up of a Fountain object in the game Cupcake Counter Let's take a look at the code that implements the Fountain class: import greenfoot.*; import java.awt.Color;   public class Fountain extends Actor { private int lifespan = 75; private int startDelay = 100; private GreenfootImage img; public Fountain() {    img = new GreenfootImage(20,20);    img.setColor(Color.blue);    img.setTransparency(100);    img.fill();    setImage(img); } public void act() {    if( --startDelay == 0 ) wipeView();    if( startDelay < 0 ) createRedBallShower(); } private void wipeView() {    img.clear(); } private void createRedBallShower() { } } The constructor for Fountain creates a new blue, semitransparent square and sets that to be its image. We start with a blue square to give the player of the game a warning that a fountain is about to erupt. Since fountains are randomly placed at any location, it would be unfair to just drop one on our player and instantly end the game. This is also why RedBall is a subclass of Enemy and Fountain is not. It is safe for the player to touch the blue square. The startDelay delay variable is used to pause for a short amount of time, then remove the blue square (using the function wipeView()), and then start the RedBall shower (using the createRedBallShower() function). We can see this in the act() method. Turrets In the game, there is a turret in the top-middle of the screen that shoots purple bouncy balls at the player. It is shown in Figure 1. Why do we use a bouncy-ball shooting turret? Because this is our game and we can! The implementation of the Turret class is very simple. Most of the functionality of rotating the turret and creating Ball to shoot is handled by CupcakeWorld in the generateBalls() method already discussed. The main purpose of this class is to just draw the initial image of the turret, which consists of a black circle for the base of the turret and a black rectangle to serve as the cannon. Here is the code: import greenfoot.*; import java.awt.Color;   public class Turret extends Actor { private GreenfootImage turret; private GreenfootImage gun; private GreenfootImage img; public Turret() {    turret = new GreenfootImage(30,30);    turret.setColor(Color.black);    turret.fillOval(0,0,30,30);       gun = new GreenfootImage(40,40);    gun.setColor(Color.black);    gun.fillRect(0,0,10,35);       img = new GreenfootImage(60,60);    img.drawImage(turret, 15, 15);    img.drawImage(gun, 25, 30);    img.rotate(0);       setImage(img); } } We previously talked about the GreenfootImage class and how to use some of its methods to do custom drawing. One new function we introduced is drawImage(). This method allows you to draw one GreenfootImage into another. This is how you compose images, and we used it to create our turret from a rectangle image and a circle image. Rewards We create a Reward class for the same reason we created an Enemy class. 
We are setting ourselves up to easily add new rewards in the future. Here is the code: import greenfoot.*;   public abstract class Reward extends Actor { } The Cupcake class is a subclass of the Reward class and represents the object on the screen the player is constantly trying to collect. However, cupcakes have no actions to perform or state to keep track of; therefore, its implementation is simple: import greenfoot.*;   public class Cupcake extends Reward { } When creating this class, I set its image to be muffin.png. This is an image that comes with Greenfoot. Even though the name of the image is a muffin, it still looks like a cupcake to me. Jumpers The Jumper class is a class that will allow all subclasses of it to jump when pressing either the up arrow key or the spacebar. At this point, we just provide a placeholder implementation: import greenfoot.*;   public abstract class Jumper extends Actor { protected int actorHeight; public Jumper() {    actorHeight = getImage().getHeight(); } public void act() {    handleKeyPresses(); } protected void handleKeyPresses() { } } The next class we are going to present is the Bob class. The Bob class extends the Jumper class and then adds functionality to let the player move it left and right. Here is the code: import greenfoot.*;   public class Bob extends Jumper { private int speed = 3; private int animationDelay = 0; private int frame = 0; private GreenfootImage[] leftImages; private GreenfootImage[] rightImages; private int actorWidth; private static final int DELAY = 3; public Bob() {    super();       rightImages = new GreenfootImage[5];    leftImages = new GreenfootImage[5];       for( int i=0; i<5; i++ ) {      rightImages[i] = new GreenfootImage("images/Dawson_Sprite_Sheet_0" + Integer.toString(3+i) + ".png");      leftImages[i] = new GreenfootImage(rightImages[i]);      leftImages[i].mirrorHorizontally();    }       actorWidth = getImage().getWidth(); } public void act() {    super.act();    checkDead();    eatReward(); } private void checkDead() {    Actor enemy = getOneIntersectingObject(Enemy.class);    if( enemy != null ) {      endGame();    } } private void endGame() {    Greenfoot.stop(); } private void eatReward() {    Cupcake c = (Cupcake) getOneIntersectingObject(Cupcake.class);    if( c != null ) {      CupcakeWorld rw = (CupcakeWorld) getWorld();      rw.removeObject(c);      rw.addCupcakeCount(1);    } } // Called by superclass protected void handleKeyPresses() {    super.handleKeyPresses();       if( Greenfoot.isKeyDown("left") ) {      if( canMoveLeft() ) {moveLeft();}    }    if( Greenfoot.isKeyDown("right") ) {      if( canMoveRight() ) {moveRight();}    } } private boolean canMoveLeft() {    if( getX() < 5 ) return false;    return true; } private void moveLeft() {    setLocation(getX() - speed, getY());    if( animationDelay % DELAY == 0 ) {      animateLeft();      animationDelay = 0;    }    animationDelay++; } private void animateLeft() {    setImage( leftImages[frame++]);    frame = frame % 5;    actorWidth = getImage().getWidth(); } private boolean canMoveRight() {    if( getX() > getWorld().getWidth() - 5) return false;    return true; } private void moveRight() {    setLocation(getX() + speed, getY());    if( animationDelay % DELAY == 0 ) {      animateRight();      animationDelay = 0;    }    animationDelay++; } private void animateRight() {    setImage( rightImages[frame++]);    frame = frame % 5;    actorWidth = getImage().getWidth(); } } Like CupcakeWorld, this class is substantial. 
We will discuss each method it contains sequentially. First, the constructor's main duty is to set up the images for the walking animation. The images came from www.wikia.com and were supplied, in the form of a sprite sheet, by the user Mecha Mario. A direct link to the sprite sheet is http://smbz.wikia.com/wiki/File:Dawson_Sprite_Sheet.PNG. Note that I manually copied and pasted the images I used from this sprite sheet using my favorite image editor. Free Internet resources Unless you are also an artist or a musician in addition to being a programmer, you are going to be hard pressed to create all of the assets you need for your Greenfoot scenario. If you look at the credits for AAA video games, you will see that the number of artists and musicians actually equal or even outnumber the programmers. Luckily, the Internet comes to the rescue. There are a number of websites that supply legally free assets you can use. For example, the website I used to get the images for the Bob class supplies free content under the Creative Commons Attribution-Share Alike License 3.0 (Unported) (CC-BY-SA) license. It is very important that you check the licensing used for any asset you download off the Internet and follow those user agreements carefully. In addition, make sure that you fully credit the source of your assets. For games, you should include a Credits screen to cite all the sources for the assets you used. The following are some good sites for free, online assets: www.wikia.com newgrounds.com http://incompetech.com opengameart.org untamed.wild-refuge.net/rpgxp.php Next, we have the act() method. It first calls the act() method of its superclass. It needs to do this so that we get the jumping functionality that is supplied by the Jumper class. Then, we call checkDead() and eatReward(). The checkDead()method ends the game if this instance of the Bob class touches an enemy, and eatReward() adds one to our score, by calling the CupcakeWorld method addCupcakeCount(), every time it touches an instance of the Cupcake class. The rest of the class implements moving left and right. The main method for this is handleKeyPresses(). Like in act(), the first thing we do, is call handleKeyPresses() contained in the Jumper superclass. This runs the code in Jumper that handles the spacebar and up arrow key presses. The key to handling key presses is the Greenfoot method isKeyDown() (see the following information box). We use this method to check if the left arrow or right arrow keys are presently being pressed. If so, we check whether or not the actor can move left or right using the methods canMoveLeft() and canMoveRight(), respectively. If the actor can move, we then call either moveLeft() or moveRight(). Handling key presses in Greenfoot The second tutorial explains how to control actors with the keyboard. To refresh your memory, we are going to present some information on the keyboard control here. The primary method we use in implementing keyboard control is isKeyDown(). This method provides a simple way to check whether a certain key is being pressed. Here is an excerpt from Greenfoot's documentation: public static boolean isKeyDown(java.lang.String keyName) Check whether a given key is currently pressed down.   Parameters: keyName:This is the name of the key to check.   This returns : true if the key is down.   Using isKeyDown() is easy. The ease of capturing and using input is one of the major strengths of Greenfoot. 
Here is example code that will pause the execution of the game if the "p" key is pressed:

if( Greenfoot.isKeyDown("p") ) {
    Greenfoot.stop();
}

Next, we will discuss canMoveLeft(), moveLeft(), and animateLeft(). The canMoveRight(), moveRight(), and animateRight() methods mirror their functionality and will not be discussed. The sole purpose of canMoveLeft() is to prevent the actor from walking off the left-hand side of the screen. The moveLeft() method moves the actor using setLocation() and then animates the actor to look as though it is moving to the left-hand side. It uses a delay variable to make the walking speed look natural (not too fast). The animateLeft() method sequentially displays the walking-left images.

Platforms

The game contains several platforms that the player can jump or stand on. The platforms perform no actions and only serve as placeholders for images. We use inheritance to simplify collision detection. Here is the implementation of Platform:

import greenfoot.*;

public class Platform extends Actor {
}

Here's the implementation of BrickWall:

import greenfoot.*;

public class BrickWall extends Platform {
}

Here's the implementation of Brick:

import greenfoot.*;

public class Brick extends Platform {
}

You should now be able to compile and test Cupcake Counter. Make sure you handle any typos or other errors you introduced while inputting the code. For now, you can only move left and right.

Summary

We have created a simple game using basic movements in Greenfoot.

Resources for Article:
Further resources on this subject:
A Quick Start Guide to Scratch 2.0 [article]
Games of Fortune with Scratch 1.4 [article]
Cross-platform Development - Build Once, Deploy Anywhere [article]
Resource Manager on CentOS 6

Packt
27 Apr 2015
19 min read
In this article is written by Mitja Resman, author of the book CentOS High Availability, we will learn cluster resource management on CentOS 6 with the RGManager cluster resource manager. We will learn how and where to find the information you require about the cluster resources that are supported by RGManager, and all the details about cluster resource configuration. We will also learn how to add, delete, and reconfigure resources and services in your cluster. Then we will learn how to start, stop, and migrate resources from one cluster node to another. When we are done with this article, your cluster will be configured to run and provide end users with a service. (For more resources related to this topic, see here.) Working with RGManager When we work with RGManager, the cluster resources are configured within the /etc/cluster/cluster.conf CMAN configuration file. RGManager has a dedicated section in the CMAN configuration file defined by the <rm> tag. Part of configuration within the <rm> tag belongs to RGManager. The RGManager section begins with the <rm> tag and ends with the </rm> tag. This syntax is common for XML files. The RGManager section must be defined within the <cluster> section of the CMAN configuration file but not within the <clusternodes> or <fencedevices> sections. We will be able to review the exact configuration syntax from the example configuration file provided in the next paragraphs. The following elements can be used within the <rm> RGManager tag: Failover Domain: (tag: <failoverdomains></failoverdomains>): A failover domain is a set of cluster nodes that are eligible to run a specific cluster service in the event of a cluster node failure. More than one failure domain can be configured with different rules applied for different cluster services. Global Resources: (tag: <resources></resources>): Global cluster resources are globally configured resources that can be related when configuring cluster services. Global cluster resources simplify the process of cluster service configuration by global resource name reference. Cluster Service: (tag: <service></service>): A cluster service usually defines more than one resource combined to provide a cluster service. The order of resources provided within a cluster service is important because it defines the resource start and stop order. The most used and important RGManager command-line expressions are as follows: clustat: The clustat command provides cluster status information. It also provides information about the cluster, cluster nodes, and cluster services. clusvcadm: The clusvcadm command provides cluster service management commands such as start, stop, disable, enable, relocate, and others. By default, RGManager logging is configured to log information related to RGManager to the syslog/var/log/messages file. If the logfile parameter in the Corosync configuration file's logging section is configured, information related to RGManager will be logged in the location specified by the logfile parameter. The default RGManager log file is named rgmanager.log. Let's start with the details of RGManager configuration. Configuring failover domains The <rm> tag in the CMAN configuration file usually begins with the definition of a failover domain, but configuring a failover domain is not required for normal operation of the cluster. A failover domain is a set of cluster nodes with configured failover rules. 
The failover domain is attached to the cluster service configuration; in the event of a cluster node failure, the configured cluster service's failover domain rules are applied. Failover domains are configured within the <rm> RGManager tag. The failover domain configuration begins with the <failoverdomains> tag and ends with the </failoverdomains> tag. Within the <failoverdomains> tag, you can specify one or more failover domains in the following form: <failoverdomain failoverdomainname failoverdomain_options> </failoverdomain> The failoverdomainname parameter is a unique name provided for the failover domain in the form of name="desired_name". The failoverdomain_options options are the rules that we apply to the failover domain. The following rules can be configured for a failover domain: Unrestricted: (parameter: restricted="0"): This failover domain configuration allows you to run a cluster service on any of the configured cluster nodes. Restricted: (parameter: restricted="1"): This failover domain configuration allows you to restrict a cluster service to run on the members you configure. Ordered: (parameter: ordered="1"): This failover domain configuration allows you to configure a preference order for cluster nodes. In the event of cluster node failure, the preference order is taken into account. The order of the listed cluster nodes is important because it is also the priority order. Unordered: (parameter: ordered="0"): This failover domain configuration allows any of the configured cluster nodes to run a specific cluster service. Failback: (parameter: nofailback="0"): This failover domain configuration allows you to configure failback for the cluster service. This means the cluster service will fail back to the originating cluster node once the cluster node is operational. Nofailback: (parameter: nofailback="1"): This failover domain configuration allows you to disable the failback of the cluster service back to the originating cluster node once it is operational. Within the <failoverdomain> tag, the desired cluster nodes are configured with a <failoverdomainnode> tag in the following form: <failoverdomainnode nodename/> The nodename parameter is the cluster node name as provided in the <clusternode> tag of the CMAN configuration file. You can add the following simple failover domain configuration to your existing CMAN configuration file. In the following screenshot, you can see the CMAN configuration file with a simple failover domain configuration. The previous example shows a failover domain named simple with no failback, no ordering, and no restrictions configured. All three cluster nodes are listed as failover domain nodes. Note that it is important to change the config_version parameter in the second line on every CMAN cluster configuration file. Once you have configured the failover domain, you need to validate the cluster configuration file. A valid CMAN configuration is required for normal operation of the cluster. If the validation of the cluster configuration file fails, recheck the configuration file for common typo errors. In the following screenshot, you can see the command used to check the CMAN configuration file for errors: Note that, if a specific cluster node is not online, the configuration file will have to be transferred manually and the cluster stack software will have to be restarted to catch up once it comes back online. Once your configuration is validated, you can propagate it to other cluster nodes. 
In this screenshot, you can see the CMAN configuration file propagation command used on the node-1 cluster node: For successful CMAN configuration file distribution to the other cluster nodes, the CMAN configuration file's config_version parameter number must be increased. You can confirm that the configuration file was successfully distributed by issuing the ccs_config_dump command on any of the other cluster nodes and comparing the XML output. Adding cluster resources and services The difference between cluster resources and cluster services is that a cluster service is a service built from one or more cluster resources. A configured cluster resource is prepared to be used within a cluster service. When you are configuring a cluster service, you reference a configured cluster resource by its unique name. Resources Cluster resources are defined within the <rm> RGManager tag of the CMAN configuration file. They begin with the <resources> tag and end with the </resources> tag. Within the <resources> tag, all cluster resources supported by RGManager can be configured. Cluster resources are configured with resource scripts, and all RGManager-supported resource scripts are located in the /usr/share/cluster directory along with the cluster resource metadata information required to configure a cluster resource. For some cluster resources, the metadata information is listed within the cluster resource scripts, while others have separate cluster resource metadata files. RGManager reads metadata from the scripts while validating the CMAN configuration file. Therefore, knowing the metadata information is the best way to correctly define and configure a cluster resource. The basic syntax used to configure a cluster resource is as follows: <resource_agent_name resource_options"/> The resource_agent_name parameter is provided in the cluster resource metadata information and is defined as name. The resource_options option is cluster resource-configurable options as provided in the cluster resource metadata information. If you want to configure an IP address cluster resource, you should first review the IP address of the cluster resource metadata information, which is available in the /usr/share/cluster/ip.sh script file. The syntax used to define an IP address cluster resource is as follows: <ip ip_address_options/> We can configure a simple IPv4 IP address, such as 192.168.88.50, and bind it to the eth1 network interface by adding the following line to the CMAN configuration: <ip address="192.168.88.50" family="IPv4" prefer_interface="eth1"/> The address option is the IP address you want to configure. The family option is the address protocol family. The prefer_interface option binds the IP address to the specific network interface. By reviewing the IP address of resource metadata information we can see that a few additional options are configurable and well explained: monitor_link nfslock sleeptime disable_rdisc If you want to configure an Apache web server cluster resource, you should first review the Apache web server resource's metadata information in the /usr/share/cluster/apache.metadata metadata file. 
The syntax used to define an Apache web server cluster resource is as follows:
<apache apache_web_server_options/>
We can configure a simple Apache web server cluster resource by adding the following line to the CMAN configuration file:
<apache name="apache" server_root="/etc/httpd" config_file="conf/httpd.conf" shutdown_wait="60"/>
The name parameter is the unique name provided for the apache cluster resource. The server_root option provides the Apache installation location. If no server_root option is provided, the default value is /etc/httpd. The config_file option is the path to the main Apache web server configuration file, relative to server_root. If no config_file option is provided, the default value is conf/httpd.conf. The shutdown_wait option is the number of seconds to wait for a correct end-of-service shutdown. By reviewing the Apache web server resource metadata, you can see that a few additional options are configurable and well explained:
httpd
httpd_options
service_name
We can add the IP address and Apache web server cluster resources to the example configuration we are building, as follows:
<resources>
 <ip address="192.168.10.50" family="IPv4" prefer_interface="eth1"/>
 <apache name="apache" server_root="/etc/httpd" config_file="conf/httpd.conf" shutdown_wait="60"/>
</resources>
Do not forget to increase the config_version parameter number. Make sure that you validate the cluster configuration file with every change. In the following screenshot, you can see the command used to validate the CMAN configuration: After we've validated our configuration, we can distribute the cluster configuration file to other nodes. In this screenshot, you can see the command used to distribute the CMAN configuration file from the node-1 cluster node to other cluster nodes:
Services
The cluster services are defined within the <rm> RGManager tag of the CMAN configuration file, after the cluster resources tag. They begin with the <service> tag and end with the </service> tag. The syntax used to define a service is as follows:
<service service_options> </service>
The resources within the cluster services are references to the globally configured cluster resources. The order of the cluster resources configured within the cluster service is important, because it is also the resource start order. The syntax for cluster resource configuration within the cluster service is as follows:
<service service_options>
 <resource_agent_name ref="referenced_cluster_resource_name"/>
</service>
The service options can be the following:
Autostart (parameter: autostart="1"): This parameter starts services when RGManager starts. By default, RGManager starts all services when it is started and quorum is present.
Noautostart (parameter: autostart="0"): This parameter disables the start of all services when RGManager starts.
Restart recovery (parameter: recovery="restart"): This is RGManager's default recovery policy. On failure, RGManager will restart the service on the same cluster node. If the service restart fails, RGManager will relocate the service to another operational cluster node.
Relocate recovery (parameter: recovery="relocate"): On failure, RGManager will try to start the service on other operational cluster nodes.
Disable recovery (parameter: recovery="disable"): On failure, RGManager will place the service in the disabled state.
Restart disable recovery (parameter: recovery="restart-disable"): On failure, RGManager will try to restart the service on the same cluster node.
If the restart fails, it will place the service in the disabled state. Additional restart policy extensions are available, as follows:
Maximum restarts (parameter: max_restarts="N", where N is the desired integer value): The maximum restarts parameter is defined by an integer that specifies the maximum number of service restarts before taking additional recovery policy actions.
Restart expire time (parameter: restart_expire_time="N", where N is the desired integer value in seconds): The restart expire time parameter is defined by an integer value in seconds, and configures the time to remember a restart event.
We can configure a web server cluster service with respect to the configured IP address and Apache web server resources with the following CMAN configuration file syntax:
<service name="webserver" autostart="1" recovery="relocate">
 <ip ref="192.168.10.50"/>
 <apache ref="apache"/>
</service>
A minimal configuration of a web server cluster service requires a cluster IP address and an Apache web server resource. The name parameter defines a unique name for the web server cluster service. The autostart parameter defines an automatic start of the webserver cluster service on RGManager startup. The recovery parameter configures relocation of the web server cluster service to another cluster node in the event of failure. We can add the web server cluster service to the example CMAN configuration file we are building, as follows:
<resources>
 <ip address="192.168.10.50" family="IPv4" prefer_interface="eth1"/>
 <apache name="apache" server_root="/etc/httpd" config_file="conf/httpd.conf" shutdown_wait="60"/>
</resources>
<service name="webserver" autostart="1" recovery="relocate">
 <ip ref="192.168.10.50"/>
 <apache ref="apache"/>
</service>
Do not forget to increase the config_version parameter. Make sure you validate the cluster configuration file with every change. In the following screenshot, we can see the command used to validate the CMAN configuration: After you've validated your configuration, you can distribute the cluster configuration file to other nodes. In this screenshot, we can see the command used to distribute the CMAN configuration file from the node-1 cluster node to other cluster nodes: With the final distribution of the cluster configuration, the cluster service is configured and RGManager starts the cluster service called webserver. You can use the clustat command to check whether the web server cluster service was successfully started and also which cluster node it is running on. In the following screenshot, you can see the clustat command issued on the node-1 cluster node: Let's take a look at the following terms:
Service Name: This column defines the name of the service as configured in the CMAN configuration file.
Owner: This column lists the node the service is running on or was last running on.
State: This column provides information about the status of the service.
Managing cluster services
Once you have configured the cluster services as you like, you must learn how to manage them. We can manage cluster services with the clusvcadm command and additional parameters. The syntax of the clusvcadm command is as follows:
clusvcadm [parameter]
With the clusvcadm command, you can perform the following actions:
Disable service (syntax: clusvcadm -d <service_name>): This stops the cluster service and puts it into the disabled state. This is the only permitted operation if the service in question is in the failed state.
Start service (syntax: clusvcadm -e <service_name> -m <cluster_node>): This starts a non-running cluster service. The -m parameter optionally provides the cluster node name you would like to start the service on.
Relocate service (syntax: clusvcadm -r <service_name> -m <cluster_node>): This stops the cluster service and starts it on a different cluster node, as provided with the -m parameter.
Migrate service (syntax: clusvcadm -M <service_name> -m <cluster_node>): Note that this applies only to virtual machine live migrations.
Restart service (syntax: clusvcadm -R <service_name>): This stops and starts a cluster service on the same cluster node.
Stop service (syntax: clusvcadm -s <service_name>): This stops the cluster service and keeps it on the current cluster node in the stopped state.
Freeze service (syntax: clusvcadm -Z <service_name>): This keeps the cluster service running on the current cluster node but disables service status checks and service failover in the event of a cluster node failure.
Unfreeze service (syntax: clusvcadm -U <service_name>): This takes the cluster service out of the frozen state and enables service status checks and failover.
We can continue with the previous example and relocate the webserver cluster service from the currently running node-1 cluster node to the node-3 cluster node. To achieve cluster service relocation, the clusvcadm command with the relocate service parameter must be used, as follows. In the following screenshot, we can see the command issued to relocate the webserver cluster service to the node-3 cluster node: The clusvcadm command is the cluster service command used to administer and manage cluster services. The -r webserver parameter provides the information that we need to relocate a cluster service named webserver. The -m node-3 parameter provides the information on where we want to relocate the cluster service. Once the cluster service relocation command completes, the webserver cluster service will be relocated to the node-3 cluster node. The clustat command shows that the webserver service is now running on the node-3 cluster node. In this screenshot, we can see that the webserver cluster service was successfully relocated to the node-3 cluster node: We can easily stop the webserver cluster service by issuing the appropriate command. In the following screenshot, we can see the command used to stop the webserver cluster service: The clusvcadm command is the cluster service command used to administer and manage cluster services. The -s webserver parameter provides the information that you require to stop a cluster service named webserver. Another look at the clustat command should show that the webserver cluster service has stopped; it also provides the information that the last owner of the webserver cluster service is the node-3 cluster node. In this screenshot, we can see the output of the clustat command, showing that the webserver cluster service is stopped and that its last owner was the node-3 cluster node: If we want to start the webserver cluster service on the node-1 cluster node, we can do this by issuing the appropriate command. In the following screenshot, we can see the command used to start the webserver cluster service on the node-1 cluster node: clusvcadm is the cluster service command used to administer and manage cluster services. The -e webserver parameter provides the information that you need to start a webserver cluster service.
The -m node-1 parameter provides the information that you need to start the webserver cluster service on the node-1 cluster node. As expected, another look at the clustat command should make it clear that the webserver cluster service has started on the node-1 cluster node, as follows. In this screenshot, you can see the output of the clustat command, showing that the webserver cluster service is running on the node-1 cluster node:
Removing cluster resources and services
Removing cluster resources and services is the reverse of adding them. Resources and services are removed by editing the CMAN configuration file and removing the lines that define the resources or services you would like to remove. When removing cluster resources, it is important to verify that the resources are not being used within any of the configured or running cluster services. As always, when editing the CMAN configuration file, the config_version parameter must be increased. Once the CMAN configuration file is edited, you must run the CMAN configuration validation check for errors. When the CMAN configuration file validation succeeds, you can distribute it to all other cluster nodes. The procedure for removing cluster resources and services is as follows:
Remove the desired cluster resources and services and increase the config_version number.
Validate the CMAN configuration file.
Distribute the CMAN configuration file to all other nodes.
We can proceed to remove the webserver cluster service from our example cluster configuration. Edit the CMAN configuration file and remove the webserver cluster service definition. Remember to increase the config_version number. Validate your cluster configuration with every CMAN configuration file change. In this screenshot, we can see the command used to validate the CMAN configuration: When your cluster configuration is valid, you can distribute the CMAN configuration file to all other cluster nodes. In the following screenshot, we can see the command used to distribute the CMAN configuration file from the node-1 cluster node to other cluster nodes: Once the cluster configuration is distributed to all cluster nodes, the webserver cluster service will be stopped and removed. The clustat command shows no service configured or running. In the following screenshot, we can see that the output of the clustat command shows that no cluster service called webserver exists in the cluster:
Summary
In this article, you learned how to add and remove cluster failover domains, cluster resources, and cluster services. We also learned how to start, stop, and relocate cluster services from one cluster node to another, and how to remove cluster resources and services from a running cluster configuration. Resources for Article: Further resources on this subject: Replication [article] Managing public and private groups [article] Installing CentOS [article]

Apache Solr and Big Data – integration with MongoDB

Packt
27 Apr 2015
9 min read
In this article by Hrishikesh Vijay Karambelkar, author of the book Scaling Big Data with Hadoop and Solr - Second Edition, we will go through Apache Solr and MongoDB together. In an enterprise, data is generated from all the software that is participating in day-to-day operations. This data has different formats, and bringing in this data for big-data processing requires a storage system that is flexible enough to accommodate data with varying data models. A NoSQL database, by its design, is best suited for this kind of storage requirement. One of the primary objectives of NoSQL is horizontal scaling, which requires partition tolerance (the P in the CAP theorem), but this comes at the cost of sacrificing either Consistency or Availability. Visit http://en.wikipedia.org/wiki/CAP_theorem to learn more about the CAP theorem. (For more resources related to this topic, see here.)
What is NoSQL and how is it related to Big Data?
As we have seen, data models for NoSQL differ completely from those of a relational database. With the flexible data model, it becomes very easy for developers to quickly integrate with the NoSQL database and bring in large volumes of data from different data sources. This makes the NoSQL database ideal for Big Data storage, since Big Data demands that different data types be brought together under one umbrella. NoSQL also offers different data models, such as key-value (KV) stores, document stores, and Big Table-style storage. In addition to a flexible schema, NoSQL offers scalability and high performance, which are again among the most important factors to consider when working with big data. NoSQL was developed to be a distributed type of database. While traditional relational stores rely on the high computing power and memory of a centralized system, NoSQL can run on your low-cost, commodity hardware. These servers can be added to or removed from the cluster running NoSQL dynamically, making the NoSQL database easier to scale. NoSQL enables most of the advanced features of a database, like data partitioning, index sharding, distributed queries, caching, and so on. Although NoSQL offers optimized storage for big data, it may not be able to replace the relational database. While relational databases offer ACID transactions, fast CRUD operations, data integrity, and a structured database design approach, which are required in many applications, NoSQL may not support them. Hence, it is best suited for Big Data, where the data is less likely to need to be transactional.
MongoDB at a glance
MongoDB is one of the popular NoSQL databases (just like Cassandra). MongoDB supports storing documents with arbitrary schemas in its own document-oriented storage. MongoDB uses JSON-based documents for any communication with the server. This database is designed to work with heavy data volumes. Today, many organizations are focusing on utilizing MongoDB for various enterprise applications. MongoDB provides high availability and load balancing. Each data unit is replicated, and the combination of the data with its copies is called a replica set. Replicas in MongoDB can either be primary or secondary. The primary is the active replica, which is used for direct read-write operations, while the secondary replica works like a backup for the primary. MongoDB supports searches by field, range queries, and regular expression searches. Queries can return specific fields of documents and also include user-defined JavaScript functions. Any field in a MongoDB document can be indexed. More information about MongoDB can be read at https://www.mongodb.org/.
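To make these querying and indexing capabilities concrete, the following is a minimal mongo shell sketch; the zips collection and its city, state, and pop fields are borrowed from the sample data set loaded later in this article, so treat it as an illustration rather than required setup:

> // Search by field
> db.zips.find({ state: "AL" })
> // Range query on a numeric field, returning only specific fields
> db.zips.find({ pop: { $gt: 10000 } }, { city: 1, state: 1 })
> // Regular expression search
> db.zips.find({ city: /^AC/ })
> // Index any field to speed up these searches
> db.zips.ensureIndex({ city: 1 })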
The data on MongoDB is eventually consistent. Apache Solr can be used to work with MongoDB, to enable database searching capabilities on a MongoDB-based data store. Unlike Cassandra, where the Solr indexes are stored directly in Cassandra through solandra, MongoDB's integration with Solr keeps the indexes in Solr's own optimized storage. There are various ways in which the data residing in MongoDB can be analyzed and searched. MongoDB's replication works by recording all operations made on a database in a log file, called the oplog (operation log). Mongo's oplog keeps a rolling record of all operations that modify the data stored in your databases. Many implementers suggest reading this log file using a standard file I/O program to push the data directly to Apache Solr, using cURL or SolrJ. Since the oplog is a capped collection with an upper limit on its storage, it is feasible to keep Apache Solr in sync with MongoDB by querying it. The oplog also provides tailable cursors on the database. These cursors can provide a natural order to the documents loaded in MongoDB, thereby preserving their order. However, we are going to look at a different approach. Let's look at the following schematic diagram: In this case, MongoDB is exposed as a database to Apache Solr through the custom database driver. Apache Solr reads MongoDB data through the DataImportHandler, which in turn calls the JDBC-based MongoDB driver for connecting to MongoDB and running data import utilities. Since MongoDB supports replica sets, it manages the distribution of data across nodes. It also supports sharding, just like Apache Solr.
Installing MongoDB
To install MongoDB in your development environment, please follow these steps:
Download the latest version of MongoDB from https://www.mongodb.org/downloads for your supported operating system. Unzip the zipped folder. MongoDB comes with a default set of command-line components and utilities:
     bin/mongod: The database process.
     bin/mongos: The sharding controller.
     bin/mongo: The database shell (uses interactive JavaScript).
Now, create a directory for MongoDB, which it will use for user data creation and management, and run the following command to start the single-node server:
$ bin/mongod --dbpath <path to your data directory> --rest
In this case, the --rest parameter enables support for simple REST APIs that can be used to get the status. Once the server is started, access http://localhost:28017 from your favorite browser; you should be able to see the following administration status page:
Now that you have successfully installed MongoDB, try loading a sample data set from the book into MongoDB by opening a new command-line interface. Change the directory to $MONGODB_HOME and run the following command:
$ bin/mongoimport --db solr-test --collection zips --file "<file-dir>/samples/zips.json"
Please note that the database name is solr-test. You can see the stored data using the MongoDB-based CLI by running the following set of commands from your shell:
$ bin/mongo
MongoDB shell version: 2.4.9
connecting to: test
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
       http://docs.mongodb.org/
Questions?
Try the support group
       http://groups.google.com/group/mongodb-user
> use test
Switched to db test
> show dbs
exampledb       0.203125GB
local   0.078125GB
test   0.203125GB
> db.zips.find({city:"ACMAR"})
{ "city" : "ACMAR", "loc" : [ -86.51557, 33.584132 ], "pop" : 6055, "state" :"AL", "_id" : "35004" }
Congratulations! MongoDB is installed successfully.
Creating Solr indexes from MongoDB
To use MongoDB as a data source for Solr, you will need a JDBC driver built for MongoDB. However, the Mongo-JDBC driver has certain limitations, and it does not work with the Apache Solr DataImportHandler. So, I have extended Mongo-JDBC to work under the Solr-based DataImportHandler. The project repository is available at https://github.com/hrishik/solr-mongodb-dih. Let's look at the setting-up procedure for enabling MongoDB-based Solr integration:
You may not require the complete package from the solr-mongodb-dih repository, just the jar file. This can be downloaded from https://github.com/hrishik/solr-mongodb-dih/tree/master/sample-jar. You will also need the following additional jar files: jsqlparser.jar and mongo.jar. These jars are available with the book Scaling Big Data with Hadoop and Solr, Second Edition for download.
In your Solr setup, copy these jar files into the library path, that is, the $SOLR_WAR_LOCATION/WEB-INF/lib folder. Alternatively, point your container classpath variable to link them up.
Using the simple Java source code DataLoad.java (link: https://github.com/hrishik/solr-mongodb-dih/blob/master/examples/DataLoad.java), populate the database with some sample schema and tables that you will use to load in Apache Solr.
Now create a data source file (data-source-config.xml) as follows:
<dataConfig>
<dataSource name="mongod" type="JdbcDataSource" driver="com.mongodb.jdbc.MongoDriver" url="mongodb://localhost/solr-test"/>
<document>
   <entity name="nameage" dataSource="mongod" query="select name, price from grocery">
       <field column="name" name="name"/>
       <field column="name" name="id"/>
       <!-- other fields -->
   </entity>
</document>
</dataConfig>
Copy the solr-dataimporthandler-*.jar from your contrib directory to a container/application library path.
Modify $SOLR_COLLECTION_ROOT/conf/solrconfig.xml with the DIH entry:
<!-- DIH Starts -->
<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
   <lst name="defaults">
   <str name="config"><path to config>/data-source-config.xml</str>
   </lst>
</requestHandler>
<!-- DIH ends -->
Once this configuration is done, you are ready to test it out. Access http://localhost:8983/solr/dataimport?command=full-import from your browser to run the full import on Apache Solr, where you will see that your import handler has successfully run and has loaded the data into the Solr store, as shown in the following screenshot: You can validate the content created by your new MongoDB DIH by accessing the Solr Admin page and running a query: Using this connector, you can perform full-import operations on various data elements. Since MongoDB is not a relational database, it does not support join queries. However, it does support selects, order by, and so on.
Summary
In this article, we looked at the distributed aspects of enterprise search by going through Apache Solr and MongoDB together. Resources for Article: Further resources on this subject: Evolution of Hadoop [article] In the Cloud [article] Learning Data Analytics with R and Hadoop [article]

Project Management

Packt
24 Apr 2015
17 min read
In this article by Patrick Li, author of the book JIRA Essentials - Third Edition, we will start with a high-level view of how data is structured in JIRA. We will then take a look at the various user interfaces that JIRA has for working with projects, both as an administrator and an everyday user. We will also introduce permissions for the first time in the context of projects and will expand on this. In this article, you will learn the following:
How JIRA structures content
Different user interfaces for project management in JIRA
How to create new projects in JIRA
How to import data from other systems into JIRA
How to manage and configure a project
How to manage components and versions
(For more resources related to this topic, see here.)
The JIRA hierarchy
Like most other information systems, JIRA organizes its data in a hierarchical structure. At the lowest level, we have fields, which are used to hold raw information. At the next level up, we have issues, which are units of work to be performed. An issue belongs to a project, which defines the context of the issue. Finally, we have project categories, which logically group similar projects together. The following figure illustrates the hierarchy we just talked about:
Project category
A project category is a logical grouping of projects, usually of a similar nature. Project categories are optional. Projects do not have to belong to any category in JIRA. When a project does not belong to any category, it is considered uncategorized. The categories themselves do not contain any information; they serve as a way to organize all your projects in JIRA, especially when you have many of them.
Project
In JIRA, a project is a collection of issues. Projects provide the background context for issues by letting users know where issues should be created. Users will be members of a project, working on issues in the project. Most configurations in JIRA, such as permissions and screen layouts, are applied at the project level. It is important to remember that projects are not limited to software development projects that need to deliver a product. They can be anything logical, such as the following:
Company department or team
Software development projects
Products or systems
A risk register
Issue
Issues represent work to be performed. From a functional perspective, an issue is the base unit for JIRA. Users create issues and assign them to other people to be worked on. Project leaders can generate reports on issues to see how everything is tracking. In a sense, you can say JIRA is issue-centric. Here, you just need to remember three things:
An issue can belong to only one project
There can be many different types of issues
An issue contains many fields that hold values for the issue
Field
Fields are the most basic unit of data in JIRA. They hold data for issues and give meaning to them. Fields in JIRA can be broadly categorized into two distinct categories, namely, system fields and custom fields. They come in many different forms, such as text fields, drop-down lists, and user pickers. Here, you just need to remember three things:
Fields hold values for issues
Fields can have behaviors (hidden or mandatory)
Fields can have a view and structure (text field or drop-down list)
Project permissions
Before we start working with projects in JIRA, we need to first understand a little bit about permissions. We will briefly talk about the permissions related to creating and deleting, administering, and browsing projects.
In JIRA, users with the JIRA administrator permission will be able to create and delete projects. By default, users in the jira-administrators group have this permission, so the administrator user we created during the installation process will be able to create new projects. We will be referring to this user and any other users with this permission as JIRA Administrator. For any given project, users with the Administer Project permission for that project will be able to administer the project's configuration settings. This allows them to update the project's details, manage versions and components, and decide who will be able to access this project. We will be referring to users with this permission as the Project Administrator. By default, the JIRA Administrator will have this permission. If a user needs to browse the contents of a given project, then he must have the Browse Project permission for that project. This means that the user will have access to the Project Browser interface for the project. By default, the JIRA Administrator will have this permission. As you have probably realized already, one of the key differences in the three permissions is that the JIRA Administrator's permission is global, which means it is global across all projects in JIRA. The Administer Project and Browse Project permissions are project-specific. A user may have the Administer Project permission for project A, but only Browse Project permission for project B. As we will see the separation of permissions allows you to set up your JIRA instance in such a way that you can effectively delegate permission controls, so you can still centralize control on who can create and delete projects, but not get over-burdened with having to manually manage each project on its own settings. Now with this in mind, let's first take a quick look at JIRA from the JIRA Administrator user's view. Creating projects To create a new project, the easiest way is to select the Create Project menu option from the Projects drop-down menu from the top navigation bar. This will bring up the create project dialog. Note that, as we explained, you need to be a JIRA Administrator (such as the user we created during installation) to create projects. This option is only available if you have the permission. When creating a new project in JIRA, we need to first select the type of project we want to create, from a list of project templates. Project template, as the name suggests, acts as a blueprint template for the project. Each project template has a predefined set of configurations such as issue type and workflows. For example, if we select the Simple Issue Tracking project template, and click on the Next button. JIRA will show us the issue types and workflow for the Simple Issue Tracking template. If we select a different template, then a different set of configurations will be applied. For those who have been using JIRA since JIRA 5 or earlier, JIRA Classic is the template that has the classic issue types and classic JIRA workflow. Clicking on the Select button will accept and select the project template. For the last step, we need to provide the new project's details. JIRA will help you validate the details, such as making sure the project key conforms to the configured format. After filling in the project details, click on the Submit button to create the new project. The following table lists the information you need to provide when creating a new project: Field Description Name A unique name for the project. 
Key A unique identity key for the project. As you type the name of your project, JIRA will auto-fill the key based on the name, but you can change the autogenerated key with one of your own. Starting from JIRA 6.1, the project key is not changeable after the project is created. The project key will also become the first part of the issue key for issues created in the project. Project Lead The lead of the project can be used to auto-assign issues. Each project can have only one lead. This option is available only if you have more than one user in JIRA. Changing the project key format When creating new projects, you may find that the project key needs to be in a specific format and length. By default, the project key needs to adhere to the following criteria: Contain at least two characters Cannot be more than 10 characters in length Contain only characters, that is, no numbers You can change the default format to have less restrictive rules. These changes are for advanced users only. First, to change the project key length perform the following steps: Browse to the JIRA Administration console. Select the System tab and then the General Configuration option. Click on the Edit Settings button. Change the value for the Maximum project key size option to a value between 2 and 255 (inclusive), and click on the Update button to apply the change. Changing the project key format is a bit more complicated. JIRA uses a regular expression to define what the format should be. To change the project key format use the following steps: Browse to the JIRA Administration console. Select the System tab and then the General Configuration option. Click on the Advanced Settings button. Hover over and click on the value (([A-Z][A-Z]+)) for the jira.projectkey.pattern option. Enter the new regular expression for the project key format, and click on Update. There are a few rules when it comes to setting the project key format: The key must start with a letter All letters must be uppercase, that is (A-Z) Only letters, numbers, and the underscore character can be used Importing data into JIRA JIRA supports importing data directly from many popular issue-tracking systems, such as Bugzilla, GitHub, and Trac. All the importers have a wizard-driven interface, guiding you through a series of steps. These steps are mostly identical with few differences. Generally speaking, there are four steps when importing data into JIRA that are as follows: Select your source data. For example, if you are importing from CSV, it will select the CSV file. If you are importing from Bugzilla, it will provide Bugzilla database details. Select a destination project where imported issues will go into. This can be an existing project or a new project created on the fly. Map old system fields to JIRA fields. Map old system field values to JIRA field values. This is usually required for select-based fields, such as the priority field, or select list custom fields. Importing data through CSV JIRA comes with a CSV importer, which lets you import data in the comma-separated value format. This is a useful tool if you want to import data from a system that is not directly supported by JIRA, since most systems are able to export their data in CSV. It is recommended to do a trial import on a test instance first. Use the following steps to import data through a CSV file: Select the Import External Project option from the Projects drop-down menu. Click on the Import from Comma-separated Values (CSV) importer option. This will start the import wizard. 
First you need to select the CSV file that contains the data you want to import, by clicking on the Choose File button. After you selected the source file, you can also expand the Advanced section to select the file encoding and delimiter used in the CSV file. There is also a Use an existing configuration option, we will talk about this later in this section. Click on the Next button to proceed. For the second step, you need to select the project you want to import our data into. You can also select the Create New option to create a new project on the fly. If your CSV file contains date-related data, make sure you enter the format used in the Date format field. Click on the Next button to proceed. For the third step, you need to map the CSV fields to the fields in JIRA. Not all fields need to be mapped. If you do not want to import a particular field, simply leave it as Don't map this field for the corresponding JIRA field selection. For fields that contain data that needs to be mapped manually, such as for select list fields, you need to check the Map field value option. This will let you map the CSV field value to the JIRA field value, so they can be imported correctly. If you do not manually map these values, they will be copied over as is. Click on the Next button to proceed. For the last step, you need to map the CSV field value to the JIRA field value. This step is only required if you have checked the Map field value option for a field in step 10. Enter the JIRA field value for each CSV field value. Once you are done with mapping field values, click on the Begin Import button to start the actual import process. Depending on the size of your data, this may take some time to complete. Once the import process completes, you will get a confirmation message that tells you the number of issues that have been imported. This number should match the number of records you have in the CSV file. On the last confirmation screen, you can click on the download a detailed log link to download the full log file containing all the information for the import process. This is particularly useful if the import was not successful. You can also click on the save the configuration link, which will generate a text file containing all the mappings you have done for this import. Even if you need to run a similar import in the future, you can use this import file so that you will not need to manually re-map everything again. To use this configuration file, check the Use an existing configuration file option in step one. As we can see, JIRA's project importer makes importing data from other systems simple and straightforward. However, you must not underestimate its complexity. For any data migration, especially if you are moving off one platform and onto a new one, such as JIRA, there are a number of factors you need to consider and prepare for. The following list summarizes some of the common tasks for most data migrations: Evaluate the size and impact. This includes how many records you will be importing and also the number of users that will be impacted by this. Perform a full gap analysis between the old system and JIRA, such as how the fields will map from one to the other. Set up test environments for you to run test imports on to make sure you have your mappings done correctly. Involve your end users as early as possible, and have them review your test results. Prepare and communicate any outages and support procedure post-migration. 
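To make the CSV import wizard described in this section concrete, the following is a minimal, purely illustrative CSV file; the column names, values, and date format are assumptions, and each column would still need to be mapped to a JIRA field in the wizard's field-mapping step, with the Date format field set to dd/MM/yyyy HH:mm to match the sample values:

Summary,Description,Priority,Reporter,Created
"Login page throws an error","Stack trace attached to the original ticket","High","admin","01/04/2015 10:15"
"Update the user guide","Documentation for release 2.1","Low","admin","02/04/2015 09:00"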
Project user interfaces There are two distinctive interfaces for projects in JIRA. The first interface is designed for everyday users, providing useful information on how the project is going with graphs and statistics, called Project Browser. The second interface is designed for project administrators to control project configuration settings, such as permissions and workflows, called Project Administration. The Help Desk project In this exercise, we will be setting up a project for our support teams: A new project category for all support teams A new project for our help desk support team Components for the systems supported by the team Versions to better manage issues created by users Creating a new project category Let's start by creating a project category. We will create a category for all of our internal support teams and their respective support JIRA projects. Please note that this step is optional as JIRA does not require any project to belong to a project category: Log in to JIRA with a user who has JIRA Administrator's permission. Browse to the JIRA administration console. Select the Projects tab and Project Categories. Fill in the fields as shown in the following screenshot. Click on Add to create the new project category. Creating a new project Now that we have a project category created, let's create a project for our help desk support team. To create a new project, perform the following steps: Bring up the create project dialog by selecting the Create Project option from the Projects drop-down menu. Select the Simple Issue Tracking project template. Name our new project as Global Help Desk and accept the other default values for Key and Project Lead. Click on the Submit button to create the new project. You should now be taken to the Project Browser interface of your new project. Assigning a project to a category Having created the new project, you need to assign the new project to your project category, and you can do this from the Project Administration interface: Select the Administration tab. Click on the None link next to Category, on the top left of the page, right underneath the project's name. Select the new Support project category we just created. Click on Select to assign the project. Creating new components As discussed in the earlier sections, components are subsections of a project. This makes logical sense for a software development project, where each component will represent a software deliverable module. For other types of project, components may first appear useless or inappropriate. It is true that components are not for every type of project out there, and this is the reason why you are not required to have them by default. Just like everything else in JIRA, all the features come from how you can best map them to your business needs. The power of a component is more than just a flag field for an issue. For example, let's imagine that the company you are working for has a range of systems that need to be supported. These may range from phone systems and desktop computers to other business applications. Let's also assume that our support team needs to support all of the systems. Now, that is a lot of systems to support. To help manage and delegate support for these systems, we will create a component for each of the systems that the help desk team supports. We will also assign a lead for each of the components. 
This setup allows us to establish a structure where the Help Desk project is led by the support team lead, and each component is led by their respective system expert (who may or may not be the same as the team lead). This allows for a very flexible management process when we start wiring in other JIRA features, such as notification schemes: From the Project Administration interface, select the Components tab. Type Internal Phone System for the new component's name. Provide a short description for the new component. Select a user to be the lead of the component. Click on Add to create the new component. Add a few more components. Putting it together Now that you have fully prepared your project, let's see how everything comes together by creating an issue. If everything is done correctly, you should see a dialog box similar to the next screenshot, where you can choose your new project to create the issue in and also the new components that are available for selection: Click on the Create button from the top navigation bar. This will bring up the Create Issue dialog box. Select Global Help Desk for Project. Select Task for Issue Type, and click on the Next button. Fill in the fields with some dummy data. Note that the Component/s field should display the components we just created. Click on the Create button to create the issue. You can test out the default assignee feature by leaving the Assignee field as Automatic, select a component, and JIRA will automatically assign the issue to the default assignee defined for the component. If everything goes well, the issue will be created in the new project. Summary In this article, we looked at one of the most important concepts in JIRA, projects, and how to create and manage them. Permissions were introduced for the first time, and we looked at three permissions that are related to creating and deleting, administering, and browsing projects. We were introduced to the two interfaces JIRA provides for project administrators and everyday users, the Project Administration interface and Project Browser interface, respectively. Resources for Article: Further resources on this subject: Securing your JIRA 4 [article] Advanced JIRA 5.2 Features [article] Validating user input in workflow transitions [article]