
How-To Tutorials - Programming

1083 Articles

Sage: Tips and Tricks

Packt
17 May 2011
6 min read
Sage Beginner's Guide: Unlock the full potential of Sage for simplifying and automating mathematical computing.

Calling the reset() function

Tip: If you start getting strange results from your calculations, you may have accidentally re-defined a built-in function or constant. Try calling the reset() function and running the calculation again. Remember that reset will delete any variables or functions that you may have defined, so your calculation will have to start over from the beginning.

The value of the variable i

Tip: Although the variable i is often used as a loop counter, the default value of i in Sage is the square root of negative one. Remember that you can use the command restore('i') to restore i to its default value.

Calling Maxima directly

Tip: Sage uses Maxima, an open-source computer algebra system, to handle many symbolic calculations. You can interact directly with Maxima from a Sage worksheet or the interactive shell by using the maxima object. For example, the following command factors an expression using Maxima:

F = maxima.factor('x^5 - y^5')

The factor function

Tip: The factor function in Sage is used to factor both polynomials and integers. This behaviour differs from Mathematica, where Factor[] is used to factor polynomials and FactorInteger[] is used to factorize integers.

Logarithms in Sage

Tip: The log function in Sage assumes the base of the logarithm is e. If you want to use a different base (such as 10), use the keyword argument base to specify it. For example:

log(x, base=10)

Specifying colors in Sage

Tip: There are several ways to specify a color in Sage. For basic colors, you can use a string containing the name of the color, such as 'red' or 'blue'. You can also use a tuple of three floating-point values between 0 and 1.0. The first value is the amount of red, the second is the amount of green, and the third is the amount of blue. For example, the tuple (0.5, 0.0, 0.5) represents a medium purple color.

Organizing code blocks

Tip: If you find a block of code occurring more than once in your program, stop and move that block of code to a function. Duplicate blocks of code make your programs harder to read and more prone to bugs.

The for statement

Tip: Don't forget to put a colon at the end of the for statement! Remember to consistently indent every statement in the loop body.

Manipulating the data in an object

Tip: As you start using objects, you may be frustrated by the lack of direct access to the data. You may find yourself tempted to avoid the methods defined by the object and manipulate its data directly. This defeats the purpose of using objects! If the methods seem to be hindering your use of the object, you probably aren't using them right. Take another look at the documentation and examples, and re-think your approach.

Items of different types in a list

Tip: The items in a list usually have the same type. Technically, it is possible to mix types in a list, but doing so makes your code harder to keep organized and readable. If the need arises to use items of different types, it may be better to use a dictionary.
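To make a few of these tips concrete, here is a short Sage session sketching the i, logarithm, and Maxima tips above; the comments note what Sage should print:

# The value of i: reclaim it after using it as a loop counter
for i in range(3):
    pass
restore('i')
print(i^2)                  # -1, since i is sqrt(-1) once more

# Logarithms: specify the base explicitly
print(log(100, base=10))    # 2

# Calling Maxima directly
F = maxima.factor('x^5 - y^5')
print(F)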
Ordered dictionaries

Tip: Python 2.7 and versions 3.1 and above contain a class called OrderedDict, which works just like an ordinary dictionary except that it remembers the order in which items were inserted. This class is not available in Sage 4.6.1 because Sage is still using Python 2.6, but it should be available soon.

Runtime errors

Tip: if statements are not ideal for catching runtime errors. Exceptions are a much more elegant way to deal with runtime errors.

Using exceptions correctly

Tip: The whole idea of using exceptions is to make it easier to identify and handle specific runtime errors in your programs. You defeat the purpose of using exceptions if you place too many lines of code in a try block, because then it's hard to tell which statement raised the exception. It's also a bad idea to have a bare except: statement that doesn't specify the type of exception being caught. This syntax will catch any type of exception, including SystemExit and KeyboardInterrupt exceptions, making it hard to terminate a misbehaving program. Finally, it's considered bad practice to catch an exception without properly handling it, as this can mask errors.

Reloading a module after making changes

Tip: Let's say you created a module called tank.py and used import tank to make its names available in a Sage script, or on the Sage command line. During testing, you found and fixed a bug, and saved the module file. However, Sage won't recognize that you changed anything unless you use the command reload(tank) to force it to reload the module. When working with multiple modules in a package, you may need to import a module on the command line (or in a worksheet cell) before reloading it.

SAGE_BROWSER settings

Tip: Sage can be used with LaTeX to typeset complex mathematical formulae and save the results as PDF or DVI files. If you have set the SAGE_BROWSER environment variable to force Sage to use a particular web browser, you might have trouble viewing PDF or DVI files in an external viewer. If this occurs, unset SAGE_BROWSER and change the default web browser for your operating system so that Sage will use the correct browser.

Optimizing innermost loops

Tip: Many numerical algorithms consist of nested loops. The statements in the innermost loop are executed more times than statements in the outer loops, so when a calculation needs to run fast, you will get the most "bang for your buck" by focusing your optimization efforts on the innermost loop.

Summary

In this article we took a look at some tips and tricks for working with Sage and using Python more effectively.
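As a brief illustration of the exception tips, here is a minimal sketch; the function name and message are illustrative, not from the book. The try block holds only the one statement that can fail, and the except clause names the specific exception type instead of using a bare except:

def parse_measurement(text):
    # Narrow try block: only the conversion can raise here
    try:
        value = float(text)
    except ValueError:
        # Handle the specific error instead of masking it
        print("Could not interpret %r as a number" % text)
        return None
    return value

parse_measurement("3.14")   # returns 3.14
parse_measurement("oops")   # prints a message and returns None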


Foreword by Microsoft Dynamics Sure Step Practitioners

Packt
17 May 2011
1 min read
Microsoft Dynamics Sure Step 2010: The smart guide to the successful delivery of Microsoft Dynamics Business Solutions.

"Investing in a business application—be it managing one's customers, tracking inventory, coordinating global resources, or just being able to get real-time visibility to cash flow—has never been so important. Gone are the days when companies invested in business applications, such as CRM and ERP, simply to streamline their supply chain or manage their sales pipeline. And gone are the days when these business applications were selected, implemented, and deployed by the IT organizations alone. Companies, and individuals within them, are relying on these business solutions to provide them a competitive advantage—an advantage that includes not only using facts and data to generate information, but also transforming it into knowledge that can be applied to gain a deeper understanding of the environment and provide a reliable business operating system for enabled intuition. This intuition of where to invest, how to plan, and when to execute in a well-planned, analysis-rich, and coordinated manner is what provides a competitive advantage to today's organizations.

The expectations of business transformation that business solutions can provide through product or service innovation, customer delight, and operational efficiency are making it even more critical to "get it right" and "provide the business backbone". Sales, marketing, operations, and services are joining the finance and IT organizations to enable this collaborative change. We need to ask ourselves what we can do to not only provide this competitive advantage to our customers, but also to provide a solution that lets them manage their own customers and businesses with better decision making.

When Microsoft decided to invest in a methodology for Microsoft Dynamics solutions, there was one goal in mind—provide our customers with a Microsoft Dynamics purchase, implementation, and ongoing experience that is unparalleled in the business solutions industry. We determined that we needed a Sure Step way to achieve this customer experience—an experience that is predicated on learning from successful implementations, and equally from the ones that went sideways due to the lack of an integrated due diligence and execution approach. Sure Step provides our partners, our value-added resellers (VARs), our independent software vendors (ISVs), and Microsoft Consulting Services and field teams with valuable guidance on the people, process, and technology aspects that need to come together in a timely, predictable, and disciplined manner to help our prospects and eventual customers "get it right". Microsoft Dynamics Sure Step is the culmination of, and an ongoing journey toward, making this vision and experience real. Are we indeed investing in the success of our customers, and through that the success of the Microsoft ecosystem of partners and ISVs, keeping these principles in mind?

I have always believed (and known from first-hand experience!) that getting into college is only the first part of an arduous, life-changing experience. Getting through college with the right skills, social temperament, informed career choices, and maybe some fun along the way, is often the most critical success factor for a sustainable lifestyle.
Investing in a business application such as Microsoft Dynamics CRM or one of the Microsoft Dynamics ERP products is not dissimilar. Making the right license purchase of software, or signing up for a subscription to one of our online solutions, is key; making sure that the software indeed helps guide our customers to ensure their business success and meet their business goals is more critical. Understanding whether the solution is being analyzed, designed, developed, deployed, and eventually adopted and operated in the context of the specific industry, with the right level of individual empowerment, in a relevant yet scalable manner to grow with the company, so that customers eventually feel enamored and positively transformed by the experience, is what ensures success. Are we thinking about the customer investment and relationship we develop as transactional events, or as a strategic relationship we wish to nurture as we watch our customers graduate successfully from the implementation of the solution to reaping the rewards of their due diligence and implementation?

For our partners, Microsoft Services, and the IT organizations of our customers, understanding the fundamental principles of any methodology, applying that framework to one's business, and driving adoption of a familiar albeit new way of managing customer expectations requires de-mystifying the method behind the perceived madness! It also becomes critical for each of you to understand how you can use the power and persuasion of Sure Step to adapt it not only to the needs of your organization, but also to the specifics of the customer engagement that you are managing, and as a result gain a competitive advantage over other business applications that may provide the capabilities but not the "customer-focused" approach to lifecycle management. Are you willing to invest time and effort in putting more discipline and accountability into the commitment that you are making for your customers' successes?

Chandru Shankar and Vincent Bellefroid have been loyal thought-leaders, advocates, and evangelists of Microsoft Dynamics Sure Step from the day we embarked on this journey of on-time, on-spec, on-budget Microsoft Dynamics engagements. Chandru Shankar has tapped into his extensive experience working in the partner channel implementing business solutions, and into the architecture of Microsoft Dynamics Sure Step, to provide deep insights, best-practice values, and easy-to-comprehend guidance on why Microsoft Dynamics Sure Step recommends what is to be done, by whom, when, and how. He delves into the details and helps the reader understand the value proposition of Sure Step, not only from a sales or implementation perspective, but also from that of ensuring that our customers are getting the most out of their investment now, and in the future. The "brain behind the brawn" makes it an enjoyable journey (yes, for a methodology read!) through self-discovery and relevant research that will hit close to home for many of you.

Vincent Bellefroid has extensive experience dealing with the accolades and brickbats associated with going fearlessly where only the best and bravest readiness, adoption, and training experts can venture. He demystifies how you can embark on a journey of Sure Step adoption, and eventual excellence, within your own organization, by applying time-tested techniques including project and change management, real-life sales and deployment scenarios, and a structured roadmap to your success.
It is hard for me to think of a more qualified team to land the message, value, and approach of Microsoft Dynamics Sure Step for our business solutions-focused, business-savvy audiences. Business-ready organizations are looking to unleash the power of their Microsoft Dynamics investments as they look to drive better decisions, based on operationally efficient business solutions. These organizations have managed their businesses to date. Can they now measure and improve? Do they have the solutions, people, and processes adopted, deployed, and executed in a manner that helps them drive the shift towards integrated end-to-end business management? This book will provide the understanding and approach you need to measure your success through the success of your customers and their business solutions."

Aditya Mohan - Director, Product Management, Microsoft Dynamics Sure Step

"One of the most important avenues to a partner's business success—both short and long term—is their ability to manage customer expectations and deliver high-quality solutions on time, on budget, and on spec. Sure Step encompasses a number of tools and guidance that enable partners to do just that, helping them drive profitable projects along with customer satisfaction and loyalty at the same time. Partners with a proven methodology have a distinct competitive advantage, by offering customers peace of mind. We have been observing an increasing number of prospects asking for Sure Step-capable partners, so we absolutely recommend that existing as well as prospective Microsoft Dynamics partners adopt Sure Step. As an added benefit, instead of spending valuable resources developing and maintaining their own methodology, partners can take full advantage of Microsoft's ongoing investments to make Sure Step even more comprehensive and robust. Partners who want to add their own flavor to Sure Step have the opportunity to do exactly that, by treating Sure Step as a methodology platform and developing "the last mile" themselves, much like ISVs build differentiating solutions on top of our ERP and CRM applications. No matter how a partner plans to leverage Sure Step, this book should help explain not only what Sure Step is about, but also how to get it implemented and adopted within the partner's organization."

Anders Spatzek - Director, Microsoft Dynamics Services & Partner Readiness

"Global organizations are typically geographically dispersed, and possess cross-functional teams with varying skill sets in different regions. Business solutions delivery for such organizations requires the ability to manage requirements and schedules dictated by multiple forces. Also, influencers and power brokers can easily create scope creep and other issues that derail these important initiatives. A consistent methodology and taxonomy is an absolute must for dealing with the pulls and demands across these organizations, to ensure that the project stays on course. Global delivery typically necessitates the involvement of multiple delivery teams, from the customer, to Microsoft, to partner organizations. Regardless of who owns the delivery of these engagements, it is of paramount importance that all the delivery resources are performing to the "same sheet of music". This is also where it is essential to have a common and consistent framework of delivery. For our practice, Microsoft Dynamics Sure Step is the tool to ensure success not only for our global practice, but more importantly for our customers and partners.
We require that our consulting organization be adept with the methodology, advocating certification in the methodology, and selecting partners who can work well within these parameters. This book will be an additional asset to help our delivery resources understand the core principles behind the methodology."

Kundan Prakash - Director Business Solutions, Microsoft Services Global Delivery

"Providing Microsoft's entrepreneurial partners and customers with industry best practices is vital for ensuring successful business growth. Microsoft Dynamics Sure Step is one of those tools that save time on implementations, with the added benefit of bringing together the communication between a sales team and a consulting practice! Stocked with a multitude of templates aligned to a phased implementation process, you can find the right tools to use at each stage of a customer engagement. In delivering the best knowledge to a global group of partners, Microsoft seeks out top business partners to provide insight and create new content that aligns to Microsoft product releases and industry direction. The result is a tool that brings over 800 pages of project management-based guidance along with more than 700 templates, samples, and links to Microsoft resources. As Sure Step can fit any size of project, any product line, a number of industry solutions, and both pre- and post-implementation activities, a new Dynamics team will benefit from guidance that gets them started down the right path to adopting Sure Step and applying it to their customers' lifecycles. This book is sure to find its way to the front of many consultants' bookshelves as the go-to reference for optimizing their use of Microsoft Dynamics Sure Step."

Lori Thalmann Pytlik - Sure Step R&D Manager

"Successful ERP and CRM implementations depend as much on the product itself as on the people and processes used to implement them. Accordingly, ERP and CRM sales processes are successful when, besides proving ease of use and showing relevant product feature sets, they help build confidence in the minds of the customers that a well-defined path exists to get their vision and objectives materialized. Simply put, Microsoft Dynamics Sure Step is the tool that provides the confidence in the pre-sales cycle and assurance during the delivery, which makes a difference. For our Microsoft Dynamics practice in Microsoft Consulting Services (MCS), we require all our consultants and project managers to be fully proficient and certified in the Microsoft Dynamics Sure Step methodology. This helps us maintain the high rate of customer satisfaction that we have in this business, as well as providing an agile and responsive workforce that speaks the same language regardless of the project they are on, or at what point in the lifecycle of a project they were introduced. This book does a great job of not only detailing what Sure Step is, but also how to best use it in various pre-sales and delivery situations to provide the confidence, consistency, and predictability in execution that make it one of the core differentiators."
Muhammad Alam - Dynamics US CTO, Microsoft Consulting Services


Python: Using doctest for Documentation

Packt
13 May 2011
6 min read
Python Testing Cookbook: Over 70 simple but incredibly effective recipes for taking control of automated testing using powerful Python testing tools. The reader can benefit from the previous article on Testing in Python using doctest.

Documenting the basics

Python provides an out-of-the-box capability to put comments in code, known as docstrings. Docstrings can be read when looking at the source and also when inspecting the code interactively from a Python shell. In this recipe, we will demonstrate how these interactive docstrings can be used as runnable tests. What does this provide? It offers easy-to-read code samples for the users. Not only are the code samples readable, they are also runnable, meaning we can ensure the documentation stays up to date.

How to do it...

With the following steps, we will create an application combined with runnable docstring comments, and see how to execute these tests:

1. Create a new file named recipe16.py to contain all the code we write for this recipe.

2. Create a function that converts base-10 numbers to any other base using recursion.

def convert_to_basen(value, base):
    import math

    def _convert(remaining_value, base, exp):
        def stringify(value):
            if value > 9:
                return chr(value + ord('a') - 10)
            else:
                return str(value)

        if remaining_value >= 0 and exp >= 0:
            factor = int(math.pow(base, exp))
            if factor <= remaining_value:
                multiple = remaining_value / factor
                return stringify(multiple) + \
                    _convert(remaining_value - multiple*factor, base, exp-1)
            else:
                return "0" + _convert(remaining_value, base, exp-1)
        else:
            return ""

    return "%s/%s" % (_convert(value, base, int(math.log(value, base))), base)

3. Add a docstring just below the external function declaration. This docstring includes several examples of using the function.

def convert_to_basen(value, base):
    """Convert a base10 number to basen.

    >>> convert_to_basen(1, 2)
    '1/2'
    >>> convert_to_basen(2, 2)
    '10/2'
    >>> convert_to_basen(3, 2)
    '11/2'
    >>> convert_to_basen(4, 2)
    '100/2'
    >>> convert_to_basen(5, 2)
    '101/2'
    >>> convert_to_basen(6, 2)
    '110/2'
    >>> convert_to_basen(7, 2)
    '111/2'
    >>> convert_to_basen(1, 16)
    '1/16'
    >>> convert_to_basen(10, 16)
    'a/16'
    >>> convert_to_basen(15, 16)
    'f/16'
    >>> convert_to_basen(16, 16)
    '10/16'
    >>> convert_to_basen(31, 16)
    '1f/16'
    >>> convert_to_basen(32, 16)
    '20/16'
    """
    import math

4. Add a test runner block that invokes Python's doctest module.

if __name__ == "__main__":
    import doctest
    doctest.testmod()

5. From an interactive Python shell, import the recipe and view its documentation.

6. Run the code from the command line. Notice how nothing is printed: this is what happens when all the tests pass.

7. Run the code from the command line with -v to increase verbosity. The output shows what was run and what was expected, which can be useful when debugging doctest.

How it works...

The doctest module looks for blocks of Python inside docstrings and runs them like real code. >>> is the same prompt we see when we use the interactive Python shell. The following line shows the expected output. doctest runs the statements it sees and then compares the actual with the expected output.

There's more...

doctest is very picky when matching expected output with actual results. An extraneous space or tab can cause things to break. Structures like dictionaries are tricky to test, because Python doesn't guarantee the order of items. On each test run, the items could be stored in a different order, so simply printing out a dictionary is bound to break the test. It is also strongly advised not to include object references in expected outputs, because these values vary every time the test is run.
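One way around the dictionary caveat, not shown in the recipe itself, is to force a deterministic representation before comparing. A minimal sketch, with an illustrative function name:

def describe(d):
    """Return dictionary contents in a stable, testable order.

    >>> describe({'b': 2, 'a': 1})
    [('a', 1), ('b', 2)]
    """
    return sorted(d.items())

if __name__ == "__main__":
    import doctest
    doctest.testmod()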
Catching stack traces

It's a common fallacy to write tests only for successful code paths. We also need to code against error conditions, including the ones that generate stack traces. With this recipe, we will explore how stack traces are pattern-matched in doc testing, which allows us to confirm expected errors.

How to do it...

With the following steps, we will see how to use doctest to verify error conditions:

1. Create a new file called recipe17.py to write all our code for this recipe.

2. Create a function that converts base-10 numbers to any other base using recursion (the same convert_to_basen function as in the previous recipe).

3. Add a docstring just below the external function declaration that includes two examples that are expected to generate stack traces.

def convert_to_basen(value, base):
    """Convert a base10 number to basen.

    >>> convert_to_basen(0, 2)
    Traceback (most recent call last):
        ...
    ValueError: math domain error

    >>> convert_to_basen(-1, 2)
    Traceback (most recent call last):
        ...
    ValueError: math domain error
    """
    import math

4. Add a test runner block that invokes Python's doctest module.

if __name__ == "__main__":
    import doctest
    doctest.testmod()

5. Run the code from the command line. Notice how nothing is printed: this is what happens when all the tests pass.

6. Run the code from the command line with -v to increase verbosity. The output shows that 0 and -1 generate math domain errors. This is due to using math.log to find the starting exponent.

How it works...

The doctest module looks for blocks of Python inside docstrings and runs them like real code. >>> is the same prompt we see when we use the interactive Python shell. The following line shows the expected output. doctest runs the statements it sees and then compares the actual output with the expected output.

A stack trace contains a lot of detailed information, and pattern matching the entire trace is ineffective. By using the ellipsis, we are able to skip the intermediate parts of the stack trace and just match on the distinguishing part: ValueError: math domain error. This is valuable, because our users can see not only how the code handles good values, but also what errors to expect when bad values are provided.
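A related directive, not used in this recipe, is +IGNORE_EXCEPTION_DETAIL: it matches on the exception type alone, which is useful when the message text varies between Python versions. A minimal sketch, with an illustrative function:

def reciprocal(x):
    """Return 1/x, letting division by zero propagate.

    >>> reciprocal(0)  #doctest: +IGNORE_EXCEPTION_DETAIL
    Traceback (most recent call last):
        ...
    ZeroDivisionError: division by zero
    """
    return 1.0 / x

if __name__ == "__main__":
    import doctest
    doctest.testmod()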


Testing in Python using doctest

Packt
13 May 2011
8 min read
Python Testing Cookbook

Coding a test harness for doctest

The doctest module supports creating objects, invoking methods, and checking results. With this recipe, we will explore this in more detail. An important aspect of doctest is that it finds individual instances of docstrings and runs them in a local context: variables declared in one docstring cannot be used in another docstring. The reader can benefit from the previous article on Python: Using doctest for Documentation.

How to do it...

1. Create a new file called recipe19.py to contain the code from this recipe.

2. Write a simple shopping cart application.

class ShoppingCart(object):
    def __init__(self):
        self.items = []

    def add(self, item, price):
        self.items.append(Item(item, price))
        return self

    def item(self, index):
        return self.items[index-1].item

    def price(self, index):
        return self.items[index-1].price

    def total(self, sales_tax):
        sum_price = sum([item.price for item in self.items])
        return sum_price*(1.0 + sales_tax/100.0)

    def __len__(self):
        return len(self.items)

class Item(object):
    def __init__(self, item, price):
        self.item = item
        self.price = price

3. Insert a docstring at the top of the module, before the ShoppingCart class declaration.

"""
This is documentation for this entire recipe.
With it, we can demonstrate usage of the code.

>>> cart = ShoppingCart().add("tuna sandwich", 15.0)
>>> len(cart)
1
>>> cart.item(1)
'tuna sandwich'
>>> cart.price(1)
15.0
>>> print round(cart.total(9.25), 2)
16.39
"""
class ShoppingCart(object):
    ...

4. Run the recipe using -m doctest and -v for verbosity.

5. Copy all the code we just wrote from recipe19.py into a new file called recipe19b.py.

6. Inside recipe19b.py, add another docstring to item, which attempts to re-use the cart variable defined at the top of the module.

def item(self, index):
    """
    >>> cart.item(1)
    'tuna sandwich'
    """
    return self.items[index-1].item

7. Run this variant of the recipe. Why does it fail? Wasn't cart declared in the earlier docstring?

How it works...

The doctest module looks for every docstring. For each docstring it finds, it creates a shallow copy of the module's global variables and then runs the code and checks results. Apart from that, every variable created is locally scoped and then cleaned up when the test is complete. This means that our second docstring, added later, cannot see the cart that was created in our first docstring. That is why the second run failed. There is no equivalent to a setUp method as we used with some of the unittest recipes. If there is no setUp option with doctest, then what value is this recipe? It highlights a key limitation of doctest that all developers must understand before using it.

There's more...

The doctest module provides an incredibly convenient way to add testability to our documentation. But it is not a substitute for a full-fledged testing framework, like unittest. As noted earlier, there is no equivalent to a setUp. There is also no syntax checking of the Python code embedded in the docstrings. Mixing the right level of doctests with unittest (or whichever testing framework we pick) is a matter of judgment.

Filtering out test noise

Various options help doctest ignore noise, such as whitespace, in test cases. This can be useful, because it allows us to structure the expected outcome in a better way, to ease reading for the users. We can also flag some tests that can be skipped. This can be used where we want to document known issues, but haven't yet patched the system. Both of these situations can easily be construed as noise when we are trying to run comprehensive testing but are focused on other parts of the system. In this recipe, we will dig in to ease the strict checking done by doctest. We will also look at how to ignore entire tests, whether on a temporary or permanent basis.

How to do it...

With the following steps, we will experiment with filtering out test results and easing certain restrictions of doctest.

1. Create a new file called recipe20.py to contain the code from this recipe.

2. Create a recursive function that converts base-10 numbers into other bases (the same convert_to_basen function shown earlier).

3. Add a docstring that includes a test to exercise a range of values, as well as documenting a future feature that is not yet implemented.

def convert_to_basen(value, base):
    """Convert a base10 number to basen.

    >>> [convert_to_basen(i, 16) for i in range(1,16)] #doctest: +NORMALIZE_WHITESPACE
    ['1/16', '2/16', '3/16', '4/16', '5/16', '6/16', '7/16',
     '8/16', '9/16', 'a/16', 'b/16', 'c/16', 'd/16', 'e/16', 'f/16']

    FUTURE: Binary may support 2's complement in the future, but not now.
    >>> convert_to_basen(-10, 2) #doctest: +SKIP
    '0110/2'
    """
    import math

4. Add a test runner.

if __name__ == "__main__":
    import doctest
    doctest.testmod()

5. Run the test case in verbose mode.

6. Copy the code from recipe20.py into a new file called recipe20b.py.

7. Edit recipe20b.py, appending the following lines to the docstring to expose that our function doesn't convert 0.

    BUG: Discovered that this algorithm doesn't handle 0.
    Need to patch it. TODO: Re-enable this test when patched.
    >>> convert_to_basen(0, 2)
    '0/2'

8. Run the test case. Notice what is different about this version of the recipe. Why does it fail?

9. Copy the code from recipe20b.py into a new file called recipe20c.py.

10. Edit recipe20c.py, updating the new test in the docstring to indicate that we will skip it for now.

    >>> convert_to_basen(0, 2) #doctest: +SKIP
    '0/2'

11. Run the test case.

How it works...

In this recipe, we revisit the function for converting from base-10 to any base. The first test shows it being run over a range. Normally, Python would fit this array of results on one line. To make it more readable, we spread the output across two lines and put some arbitrary spaces between the values to make the columns line up better. This is something that doctest definitely would not support, due to its strict pattern-matching nature. By using #doctest: +NORMALIZE_WHITESPACE, we ask doctest to ease this restriction. There are still constraints; for example, the first value in the expected array cannot have any whitespace in front of it. But wrapping the array onto the next line no longer breaks the test.

We also have a test case that is really meant as documentation only. It indicates a future requirement showing how our function would handle negative binary values. By adding #doctest: +SKIP, we command doctest to skip this particular instance.

Finally, we see the scenario where we discover that our code doesn't handle 0. As the algorithm gets the highest exponent by taking a logarithm, there is a math problem. We capture this edge case with a test, and then confirm that the code fails, in classic test-driven development (TDD) fashion. The final step would be to fix the code to handle this edge case. But we decide, in a somewhat contrived fashion, that we don't have enough time in the current sprint to fix the code. To avoid breaking our continuous integration (CI) server, we mark the test with a TODO statement and add #doctest: +SKIP.

There's more...

Both of the situations that we marked with #doctest: +SKIP are cases where we will eventually want to remove the SKIP tag and have the tests run. There may be other situations where we will never remove SKIP. Demonstrations of code with big fluctuations in output may not be readily testable without making them unreadable. For example, functions that return dictionaries are harder to test, because the order of results varies. We can bend the code to pass a test, but we may lose its value as documentation presentable to the reader.
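One more directive worth knowing in this context, though not used in the recipes above, is +ELLIPSIS. It lets "..." match any text in the expected output, which helps with values that change on every run, such as object references. A minimal sketch:

class Widget(object):
    pass

def make_widget():
    """Build a Widget.

    The memory address differs on every run, so match it loosely:

    >>> make_widget()  #doctest: +ELLIPSIS
    <...Widget object at 0x...>
    """
    return Widget()

if __name__ == "__main__":
    import doctest
    doctest.testmod()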


Sage: 3D Data Plotting

Packt
09 May 2011
6 min read
Sage Beginner's Guide: Unlock the full potential of Sage for simplifying and automating mathematical computing.

Time for action – make an interactive 3D plot

Let's make an interactive 3D plot:

var('x, y')
p3d = plot3d(y^2 + 1 - x^3 - x, (x, -pi, pi), (y, -pi, pi))
p3d.show()

If you run this example in the notebook interface, a Java applet called Jmol will run in the cell below the code. If you run it from the interactive shell, Jmol will launch as a stand-alone application. Clicking and dragging on the figure with the left mouse button rotates the plot in 3D space. Clicking and dragging with the centre button, or moving the scroll wheel, zooms in and out. Right-clicking brings up a menu that allows you to set various options for Jmol. Since Jmol is also used to visualize the 3D structures of molecules, some of the options are not relevant for plotting functions.

What just happened?

We made an interactive 3D plot that allowed us to explore a function of two variables. When running Jmol as an applet in a worksheet, you can click on the "Get Image" link below the plot to save an image of the plot in its current state. However, the image quality is not particularly high because it is saved in JPEG format. When Jmol is called from the command line, it runs as a stand-alone application, and more options are available. You can save files in JPEG, GIF, PPM, PNG, or PDF format. Note that the PDF format is a bitmap embedded in a PDF file, rather than a true vector representation of the surface.

The syntax for using plot3d is very simple:

plot3d(f(x,y), (x, x_min, x_max), (y, y_min, y_max))

There are a few optional arguments to the show method that you can use to alter the appearance of the plot. Setting mesh=True plots a mesh on the surface, and setting dots=True plots a small sphere at each point. You can also use the transformation keyword argument to apply a transformation to the data; see the plot3d documentation for more information.

Higher quality output

We can improve the quality of saved images using ray tracing, an algorithm for generating images that is based on optical principles. Sage comes with ray-tracing software called Tachyon, which can be used to view 3D plots. To activate Tachyon, use the show method with the viewer keyword as shown below:

p3d.show(viewer='tachyon', frame=False, axes=True)

Depending on the speed of your computer, the ray tracing may take from a few seconds to a few minutes. The frame keyword selects whether or not to draw a box around the outer limits of the plot, while the axes keyword determines whether or not the axes are drawn.

Parametric 3D plotting

Sage can also plot functions of two variables that are defined in terms of a parameter. You can make very complex surfaces in this way.

Time for action – parametric plots in 3D

We will plot two interlocking rings to demonstrate how complex surfaces are easily plotted using three functions of two parameters:

var('u, v')
f1 = (4 + (3 + cos(v)) * sin(u), 4 + (3 + cos(v)) * cos(u), 4 + sin(v))
f2 = (8 + (3 + cos(v)) * cos(u), 3 + sin(v), 4 + (3 + cos(v)) * sin(u))
p1 = parametric_plot3d(f1, (u, 0, 2 * pi), (v, 0, 2 * pi), texture="red")
p2 = parametric_plot3d(f2, (u, 0, 2 * pi), (v, 0, 2 * pi), texture="blue")
combination = p1 + p2
combination.show()

What just happened?

We made a very complex 3D shape using the parametric_plot3d function.
The optional arguments for this function are the same as the options for the plot3d function.

Contour plots

Sage can also make contour plots, which are 2D representations of 3D surfaces. While 3D plots are eye-catching, a 2D plot can be a more practical way to convey information about a function or data set.

Time for action – making some contour plots

The following code demonstrates four different ways to make a 2D plot of a 3D surface with Sage:

var('x, y')
text_coords = (2, -3.5)
cp = contour_plot(y^2 + 1 - x^3 - x, (x, -3, 3), (y, -3, 3),
                  contours=8, linewidths=srange(0.5, 4.0, 0.5),
                  fill=False, labels=True, label_colors='black',
                  cmap='gray', colorbar=False)
cp += text("Contour", text_coords)
ip = implicit_plot(y^2 + 1 - x^3 - x, (x, -3, 3), (y, -3, 3))
ip += text("Implicit", text_coords)
rp = region_plot(y^2 + 1 - x^3 - x < 0, (x, -3, 3), (y, -3, 3),
                 incol=(0.8, 0.8, 0.8))  # color is an (R,G,B) tuple
rp += text("Region", text_coords)
dp = density_plot(y^2 + 1 - x^3 - x, (x, -3, 3), (y, -3, 3))
dp += text("Density", text_coords)
show(graphics_array([cp, ip, rp, dp], 2, 2), aspect_ratio=1, figsize=(6, 6))

What just happened?

The plots we made demonstrate four different ways of visualizing the function we plotted in the previous example. All four functions follow the same syntax as plot3d:

contour_plot(f(x,y), (x, x_min, x_max), (y, y_min, y_max))

contour_plot plots level curves of the surface; in other words, z is constant on each curve. implicit_plot does the same thing, but only plots the curve where z=0. region_plot determines the curve for which z=0, and then fills in the region where z<0. Finally, density_plot converts the z value of the function to a color value and plots a color map of the z values over the x-y plane.

We used the contour plot to demonstrate some of the keyword arguments that can be used to control the appearance of the plot. Here is a summary of the options we used:

contours: The number of contours to draw
linewidths: A list of line widths, corresponding to the number of contours
fill: True to fill in between the contours
labels: True to label each contour
label_colors: Color to use for labels
cmap: Color map to use for contour lines
colorbar: True to display a scale bar showing the color map

Summary

In this article we learned about making three-dimensional plots and contour plots.
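As a small supplement, the show options named in the text (mesh and dots) can be tried on the first surface in this article; this sketch simply re-renders that plot with each option in turn:

var('x, y')
p3d = plot3d(y^2 + 1 - x^3 - x, (x, -pi, pi), (y, -pi, pi))
p3d.show(mesh=True)   # overlay the sampling mesh on the surface
p3d.show(dots=True)   # mark each sampled point with a small sphere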


Plotting Data with Sage

Packt
05 May 2011
14 min read
Sage Beginner's Guide: Unlock the full potential of Sage for simplifying and automating mathematical computing.

Confusion alert: Sage plots and matplotlib

The 2D plotting capabilities of Sage are built upon a Python plotting package called matplotlib. The most widely used features of matplotlib are accessible through Sage functions. You can also import the matplotlib package into Sage and use all of its features directly. This is very powerful, but it's also confusing, because there's more than one way to do the same thing. To further add to the confusion, matplotlib has two interfaces: the command-oriented pyplot interface and an object-oriented interface. The examples in this chapter will attempt to clarify which interface is being used.

Plotting in two dimensions

Two-dimensional plots are probably the most important tool for visually presenting information in math, science, and engineering. Sage has a wide variety of tools for making many types of 2D plots.

Plotting symbolic expressions with Sage

We will start by exploring the plotting functions that are built in to Sage. They are generally less flexible than using matplotlib directly, but they also tend to be easier to use.

Time for action – plotting symbolic expressions

Let's plot some simple functions. Enter the following code:

p1 = plot(sin, (-2*pi, 2*pi), thickness=2.0, rgbcolor=(0.5, 1, 0),
          legend_label='sin(x)')
p2 = plot(cos, (-2*pi, 2*pi), thickness=3.0, color='purple', alpha=0.5,
          legend_label='cos(x)')
plt = p1 + p2
plt.axes_labels(['x', 'f(x)'])
show(plt)

If you run the code from the interactive shell, the plot will open in a separate window. If you run it from the notebook interface, the plot will appear below the input cell.

What just happened?

This example demonstrated the most basic type of plotting in Sage. The plot function requires the following arguments:

graphics_object = plot(callable_symbolic_expression, (independent_var, ind_var_min, ind_var_max))

The first argument is a callable symbolic expression, and the second argument is a tuple consisting of the independent variable, the lower limit of the domain, and the upper limit. If there is no ambiguity, you do not need to specify the independent variable. Sage automatically selects the right number of points to make a nice curve in the specified domain.

The plot function returns a graphics object. To combine two graphics objects in the same image, use the + operator: plt = p1 + p2. Graphics objects have additional methods for modifying the final image; in this case, we used the axes_labels method to label the x and y axes. Finally, the show function was used to finish the calculation and display the image.

The plot function accepts optional arguments that can be used to customize the appearance and format of the plot.
To see a list of all the options and their default values, type:

sage: plot.options
{'fillalpha': 0.5, 'detect_poles': False, 'plot_points': 200,
 'thickness': 1, 'alpha': 1, 'adaptive_tolerance': 0.01,
 'fillcolor': 'automatic', 'adaptive_recursion': 5, 'exclude': None,
 'legend_label': None, 'rgbcolor': (0, 0, 1), 'fill': False}

Here is a summary of the options for customizing the appearance of a plot:

alpha: Transparency of the line (0=opaque, 1=transparent)
fill: True to fill the area below the line
fillalpha: Transparency of the filled-in area (0=opaque, 1=transparent)
fillcolor: Color of the filled-in area
rgbcolor: Color of the line

Sage uses an algorithm to determine the best number of points to use for the plot, and how to distribute them on the x axis. The algorithm uses recursion to add more points to resolve regions where the function changes rapidly. Here are the options that control how the plot is generated:

adaptive_recursion: Maximum depth of recursion when resolving areas of the plot where the function changes rapidly
adaptive_tolerance: Tolerance for stopping recursion
detect_poles: Detect points where the function value approaches infinity (see the next example)
exclude: A list or tuple of points to exclude from the plot
plot_points: Number of points to use in the plot

Specifying colors in Sage

There are several ways to specify a color in Sage. For basic colors, you can use a string containing the name of the color, such as 'red' or 'blue'. You can also use a tuple of three floating-point values between 0 and 1.0. The first value is the amount of red, the second is the amount of green, and the third is the amount of blue. For example, the tuple (0.5, 0.0, 0.5) represents a medium purple color.

Some functions "blow up" to plus or minus infinity at a certain point. A simplistic plotting algorithm will have trouble plotting these points, but Sage adapts.

Time for action – plotting a function with a pole

Let's try to plot a simple function that takes on infinite values within the domain of the plot:

pole_plot = plot(1 / (x - 1), (0.8, 1.2), detect_poles='show', marker='.')
print("min y = {0} max y = {1}".format(pole_plot.ymin(), pole_plot.ymax()))
pole_plot.ymax(100.0)
pole_plot.ymin(-100.0)
# Use TeX to make nicer labels
pole_plot.axes_labels([r'$x$', r'$\frac{1}{(x-1)}$'])
pole_plot.show()

What just happened?

We did a few things differently compared to the previous example. We defined a callable symbolic expression right in the plot function. We also used the option detect_poles='show' to plot a dashed vertical line at the x value where the function returns infinite values. The option marker='.' tells Sage to use a small dot to mark the individual (x,y) values on the graph; in this case, the dots are so close together that they look like a fat line.

We also used the methods ymin and ymax to get and set the minimum and maximum values of the vertical axis. When called without arguments, these methods return the current values; when given an argument, they set the minimum and maximum values of the vertical axis.

Finally, we labeled the axes with nicely typeset mathematical expressions. As in the previous example, we used the method axes_labels to set the labels on the x and y axes. However, we did two special things with the label strings:

r'$\frac{1}{(x-1)}$'

The letter r is placed in front of the string, which tells Python that this is a raw string.
When processing a raw string, Python does not interpret backslash characters as commands (such as interpreting \n as a newline). Note that the first and last characters of the string are dollar signs, which tells Sage that the string contains markup that needs to be processed before being displayed. The markup language is a subset of TeX, which is widely used for typesetting complicated mathematical expressions. Sage performs this processing with a built-in interpreter, so you don't need to have TeX installed to take advantage of typeset labels. It's a good idea to use raw strings to hold TeX markup because TeX uses a lot of backslashes. To learn about the typesetting language, see the matplotlib documentation at:

http://matplotlib.sourceforge.net/users/mathtext.html

Time for action – plotting a parametric function

Some functions are defined in terms of a parameter. Sage can easily plot parametric functions:

var('t')
pp = parametric_plot((cos(t), sin(t)), (t, 0, 2*pi), fill=True,
                     fillcolor='blue')
pp.show(aspect_ratio=1, figsize=(3, 3), frame=True)

What just happened?

We used two parametric functions to plot a circle. This is a convenient place to demonstrate the fill option, which fills in the space between the function and the horizontal axis. The fillcolor option tells Sage which color to use for the fill, and the color can be specified in the usual ways.

We also demonstrated some useful options for the show method (these options also work with the show function). The option aspect_ratio=1 forces the x and y axes to use the same scale: one unit on the x axis takes up the same number of pixels on the screen as one unit on the y axis. Try changing the aspect ratio to 0.5 and 2.0, and see how the circle looks. The option figsize=(x_size, y_size) specifies the aspect ratio and relative size of the figure. The units for the figure size are relative, and don't correspond to an absolute unit like inches or centimetres. The option frame=True places a frame with tick marks around the outside of the plot.

Time for action – making a polar plot

Some functions are more easily described in terms of angle and radius: the angle is the independent variable, and the radius at that angle is the dependent variable. Polar plots are widely used in electrical engineering to describe the radiation pattern of an antenna. Some antennas are designed to transmit (or receive) electromagnetic radiation in a very narrow beam, whose shape is known as the radiation pattern. One way to achieve a narrow beam is to use an array of simple dipole antennas and carefully control the phase of the signal fed to each antenna. In the following example, we will consider seven short dipole antennas set in a straight line:

# A linear broadside array of short vertical dipoles
# located along the z axis with 1/2 wavelength spacing
var('r, theta')
N = 7
normalized_element_pattern = sin(theta)
array_factor = 1 / N * sin(N * pi / 2 * cos(theta)) / sin(pi / 2 * cos(theta))
array_plot = polar_plot(abs(array_factor), (theta, 0, pi),
                        color='red', legend_label='Array')
radiation_plot = polar_plot(abs(normalized_element_pattern * array_factor),
                            (theta, 0, pi), color='blue',
                            legend_label='Radiation')
combined_plot = array_plot + radiation_plot
combined_plot.xmin(-0.25)
combined_plot.xmax(0.25)
combined_plot.set_legend_options(loc=(0.5, 0.3))
show(combined_plot, figsize=(2, 5), aspect_ratio=1)

What just happened?
We plotted a polar function, and used several of the plotting features that we've already discussed. There are two subtle points worth mentioning. The function array_factor is a function of two variables, N and theta. In this example, N is more like a parameter, while theta is the independent variable we want to use for plotting. We use the syntax (theta, 0, pi) in the plot function to indicate that theta is the independent variable. The second new aspect of this example is that we used the methods xmin and xmax to set the limits of the x axis for the graphics object called combined_plot. We also used the set_legend_options method of the graphics object to adjust the position of the legend, to avoid covering up important details of the plot.

Time for action – plotting a vector field

Vector fields are used to represent force fields such as electromagnetic fields, and to visualize the solutions of differential equations. Sage has a special plotting function for vector fields:

var('x, y')
a = plot_vector_field((x, y), (x, -3, 3), (y, -3, 3), color='blue')
b = plot_vector_field((y, -x), (x, -3, 3), (y, -3, 3), color='red')
show(a + b, aspect_ratio=1, figsize=(4, 4))

What just happened?

The plot_vector_field function uses the following syntax:

plot_vector_field((x_function, y_function), (x, x_min, x_max), (y, y_min, y_max))

The keyword argument color specifies the color of the vectors.

Plotting data in Sage

So far, we've been making graphs of functions. We specify the function and the domain, and Sage automatically chooses the points to make a nice-looking curve. Sometimes, we need to plot discrete data points that represent experimental measurements or simulation results. The following functions are used for plotting defined sets of points.

Time for action – making a scatter plot

Scatter plots are used in science and engineering to look for correlation between two variables. A cloud of points that is roughly circular indicates that the two variables are independent, while a more elliptical arrangement indicates that there may be a relationship between them. In the following example, the x and y coordinates are contrived to make a nice plot; in real life, they would typically be read in from data files. Enter the following code:

def noisy_line(m, b, x):
    return m * x + b + 0.5 * (random() - 0.5)

slope = 1.0
intercept = -0.5
x_coords = [random() for t in range(50)]
y_coords = [noisy_line(slope, intercept, x) for x in x_coords]
sp = scatter_plot(zip(x_coords, y_coords))
sp += line([(0.0, intercept), (1.0, slope + intercept)], color='red')
sp.show()

Note that your results will differ from run to run, since the point positions are determined randomly.

What just happened?

We created a list of randomized x coordinates using the built-in random function, which returns a random number in the range 0 <= x < 1. We defined a function called noisy_line and used it to create a list of randomized y coordinates with a linear relationship to the x coordinates. We now have a list of x coordinates and a list of y coordinates, but the scatter_plot function needs a list of (x,y) tuples. The zip function takes the two lists and combines them into a single list of tuples. The scatter_plot function returns a graphics object called sp.
To add a line object to the plot, we use the following syntax:

sp += line([(x1, y1), (x2, y2)], color='red')

The += operator is a way to increment a variable; x += 1 is a shortcut for x = x + 1. Because the + operator also combines graphics objects, this syntax can be used to add a graphics object to an existing graphics object.

Time for action – plotting a list

Sometimes, you need to plot a list of discrete data points. The following example might be found in an introductory digital signal processing (DSP) course. We will use lists to represent digital signals. We sample the analogue function cosine(t) at two different sampling rates, and plot the resulting digital signals:

# Use list_plot to visualize digital signals
# Undersampling and oversampling a cosine signal
sample_times_1 = srange(0, 6*pi, 4*pi/5)
sample_times_2 = srange(0, 6*pi, pi/3)
data1 = [cos(t) for t in sample_times_1]
data2 = [cos(t) for t in sample_times_2]
plot1 = list_plot(zip(sample_times_1, data1), color='blue')
plot1.axes_range(0, 18, -1, 1)
plot1 += text("Undersampled", (9, 1.1), color='blue', fontsize=12)
plot2 = list_plot(zip(sample_times_2, data2), color='red')
plot2.axes_range(0, 18, -1, 1)
plot2 += text("Oversampled", (9, 1.1), color='red', fontsize=12)
g = graphics_array([plot1, plot2], 2, 1) # 2 rows, 1 column
g.show(gridlines=["minor", False])

The result is as follows:

What just happened?

The function list_plot works a lot like scatter_plot from the previous example, so I won't explain it again. We used the method axes_range(x_min, x_max, y_min, y_max) to set the limits of the x and y axes all at once. Once again, we used the += operator to add a graphics object to an existing object. This time, we added a text annotation instead of a line. The basic syntax for adding text at a given (x,y) position is text('a string', (x,y)). To see the options that text accepts, type the following:

sage: text.options
{'vertical_alignment': 'center', 'fontsize': 10, 'rgbcolor': (0, 0, 1), 'horizontal_alignment': 'center', 'axis_coords': False}

To display the two plots, we introduced a new function called graphics_array, which uses the basic syntax:

graphics_array([plot_1, plot_2, ..., plot_n], num_rows, num_columns)

This function returns another graphics object, and we used the show method to display the plots. We used the keyword argument gridlines=["minor", False] to tell Sage to display vertical lines at each of the minor ticks on the x axis. The first item in the list specifies vertical grid lines, and the second specifies horizontal grid lines. The following options can be used for either element:

"major": grid lines at major ticks
"minor": grid lines at major and minor ticks
False: no grid lines

Try playing with these options in the previous example.
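Building on that suggestion, here is a small sketch (our addition, not part of the original examples) that ties together two earlier points: a raw string holding TeX markup for a legend label, and the gridlines options listed above. The function being plotted is arbitrary and chosen purely for illustration:

# A minimal sketch: raw-string TeX markup in a legend label,
# plus horizontal-only grid lines at major ticks
p = plot(sin(x)/x, (x, -10, 10), color='green',
         legend_label=r'$\frac{\sin(x)}{x}$')  # raw string avoids escaping backslashes
p.show(gridlines=[False, "major"], figsize=(4, 3))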
What can you do with SageMath?

Getting started with the basics of SageMath You don't have to install Sage to try it out! In this article, we will use the notebook interface to showcase some of the basics of Sage so that you can follow along using a public notebook server. These examples can also be run from an interactive session if you have installed Sage. Go to http://www.sagenb.org and sign up for a free account. You can also browse worksheets created and shared by others. The notebook interface should look like this: Create a new worksheet by clicking on the link called New Worksheet: Type in a name when prompted, and click Rename. The new worksheet will look like this: Enter an expression by clicking in an input cell and typing or pasting in an expression: Click the evaluate link or press Shift-Enter to evaluate the contents of the cell. A new input cell will automatically open below the results of the calculation. You can also create a new input cell by clicking in the blank space just above an existing input cell. Using Sage as a powerful calculator Sage has all the features of a scientific calculator—and more. If you have been trying to perform mathematical calculations with a spreadsheet or the built-in calculator in your operating system, it's time to upgrade. Sage offers all the built-in functions you would expect. Here are a few examples: If you have to make a calculation repeatedly, you can define a function and variables to make your life easier. For example, let's say that you need to calculate the Reynolds number, which is used in fluid mechanics: You can define a function and variables like this: Re(velocity, length, kinematic_viscosity) = velocity * length / kinematic_viscosity v = 0.01 L = 1e-3 nu = 1e-6 Re(v, L, nu) When you type the code into an input cell and evaluate the cell, your screen will look like this: Now, you can change the value of one or more variables and re-run the calculation: Sage can also perform exact calculations with integers and rational numbers. Using the pre-defined constant pi will result in exact values from trigonometric operations. Sage will even utilize complex numbers when needed. Here are some examples: Symbolic mathematics Much of the difficulty of higher mathematics actually lies in the extensive algebraic manipulations that are required to obtain a result. Sage can save you many hours, and many sheets of paper, by automating some tedious tasks in mathematics. We'll start with basic calculus. For example, let's compute the derivative of the following equation: The following code defines the equation and computes the derivative: var('x') f(x) = (x^2 - 1) / (x^4 + 1) show(f) show(derivative(f, x)) The results will look like this: The first line defines a symbolic variable x (Sage automatically assumes that x is always a symbolic variable, but we will define it in each example for clarity). We then defined a function as a quotient of polynomials. Taking the derivative of f(x) would normally require the use of the quotient rule, which can be very tedious to calculate. Sage computes the derivative effortlessly. Now, we'll move on to integration, which can be one of the most daunting tasks in calculus. Let's compute the following indefinite integral symbolically: The code to compute the integral is very simple: f(x) = e^x * cos(x) f_int(x) = integrate(f, x) show(f_int) The result is as follows: To perform this integration by hand, integration by parts would have to be done twice, which could be quite time consuming. 
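A quick way to build confidence in a symbolic result like this one (this check is our addition, not part of the original example) is to differentiate the antiderivative and confirm that Sage recovers the integrand:

f(x) = e^x * cos(x)
f_int(x) = integrate(f, x)
show(derivative(f_int, x).simplify_full())  # should display e^x*cos(x)

The call to simplify_full is needed because the raw derivative comes back in an expanded, unsimplified form.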
If we want to better understand the function we just defined, we can graph it with the following code: f(x) = e^x * cos(x) plot(f, (x, -2, 8)) Sage will produce the following plot: Sage can also compute definite integrals symbolically: To compute a definite integral, we simply have to tell Sage the limits of integration: f(x) = sqrt(1 - x^2) f_integral = integrate(f, (x, 0, 1)) show(f_integral) The result is: This would have required the use of a substitution if computed by hand. Have a go hero There is actually a clever way to evaluate the integral from the previous problem without doing any calculus. If it isn't immediately apparent, plot the function f(x) from 0 to 1 and see if you recognize it. Note that the aspect ratio of the plot may not be square. The partial fraction decomposition is another technique that Sage can do a lot faster than you. The solution to the following example covers two full pages in a calculus textbook —assuming that you don't make any mistakes in the algebra! f(x) = (3 * x^4 + 4 * x^3 + 16 * x^2 + 20 * x + 9) / ((x + 2) * (x^2 + 3)^2) g(x) = f.partial_fraction(x) show(g) The result is as follows: We'll use partial fractions again when we talk about solving ordinary differential equations symbolically. Linear algebra   Linear algebra is one of the most fundamental tasks in numerical computing. Sage has many facilities for performing linear algebra, both numerical and symbolic. One fundamental operation is solving a system of linear equations:   Although this is a tedious problem to solve by hand, it only requires a few lines of code in Sage: A = Matrix(QQ, [[0, -1, -1, 1], [1, 1, 1, 1], [2, 4, 1, -2], [3, 1, -2, 2]]) B = vector([0, 6, -1, 3]) A.solve_right(B) The answer is as follows: Notice that Sage provided an exact answer with integer values. When we created matrix A, the argument QQ specified that the matrix was to contain rational values. Therefore, the result contains only rational values (which all happen to be integers for this problem).  
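As a sanity check (our addition, not from the original article), we can multiply the solution back into the system and confirm that it reproduces the right-hand side:

A = Matrix(QQ, [[0, -1, -1, 1], [1, 1, 1, 1], [2, 4, 1, -2], [3, 1, -2, 2]])
B = vector([0, 6, -1, 3])
X = A.solve_right(B)
A * X == B  # evaluates to True

Because the matrix was declared over QQ, this comparison is exact; there is no floating-point tolerance to worry about.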
Oracle Siebel CRM 8: User Properties for Specialized Application Logic

Understanding user properties

User properties are child object types which are available for the following object types in the Siebel Repository:

Applet, Control, List Column
Application
Business Service
Business Component, Field
Integration Object, Integration Component, Integration Component Field
View

To view the User Property (or User Prop, as it is sometimes abbreviated) object type, we typically have to modify the list of displayed types for the Object Explorer window. This can be achieved by selecting the Options command in the View menu. In the Object Explorer tab of the Development Tools Options dialog, we can select the object types for display as shown in the following screenshot: In the preceding example, the Business Component User Prop type is enabled for display. After confirming the changes in the Development Tools Options dialog by clicking the OK button, we can, for example, navigate to the Account business component and review its existing user properties by selecting the Business Component User Prop type in the Object Explorer. The following screenshot shows the list of user properties for the Account business component: The screenshot also shows the standard Properties window on the right. This is to illustrate that a list of user properties, which mainly define a Name/Value pair, can be simply understood as an extension to an object type's usual properties, which are accessible by means of the Properties window and represent Name/Value pairs as well.

Because an additional user property is just a new record in the Siebel Repository, the list of user properties for a given parent record is theoretically infinite. This allows developers to define a rich set of business logic as a simple list of Name/Value pairs instead of having to write program code. The Name property of a user property definition must use a reserved name—and optional sequence number—as defined by Oracle engineering. The Value property must also follow the syntax defined for the special purpose of the user property.

Did you know? The list of available names for a user property depends on the object type (for example, Business Component) and the C++ class associated with the object definition. For example, the business component Account is associated with the CSSBCAccountSIS class, which defines a different range of available user property names than other classes. Many user property names are officially documented in the Siebel Developer's Reference guide in the Siebel Bookshelf. We can find the guide online at the following URL: http://download.oracle.com/docs/cd/E14004_01/books/ToolsDevRef/ToolsDevRef_UserProps.html The user property names described in this guide are intended for use by custom developers. Any other user property which we may find in the Siebel Repository but which is not officially documented should be considered an internal user property of Oracle engineering.
Because the internal user properties could change in a future version of Siebel CRM in both syntax and behavior without prior notice, it is highly recommended to use only user properties which are documented by Oracle. Another way to find out which user property names are made available by Oracle to customers is to click the dropdown icon in the Name property of a user property record. This opens the user property pick list which displays a wide range of officially documented user properties along with a description text. Multi-instance user properties Some user properties can be instantiated more than once. If this is the case a sequence number is used to generate a distinguished name. For example, the On Field Update Set user property used on business components uses a naming convention as displayed in the following screenshot: In the previous example, we can see four instances of the On Field Update Set user property distinguished by a sequential numeric suffix (1 to 4). Because it is very likely that Oracle engineers and custom developers add additional instances of the same user property while working on the next release, Oracle provides a customer allowance gap of nine instances for the next sequence number. In the previous example, a custom developer could continue the set of On Field Update Set user properties with a suffix of 13. By doing so, the custom developer will most likely avoid conflicts during an upgrade to a newer version of Siebel CRM. The Oracle engineer would continue with a suffix of five and upgrade conflicts will only occur when Oracle defines more than eight additional instances. The gap of nine also ensures that the sequence of multi-instance user properties is still functional when one or more of the user property records are marked as inactive. In the following sections, we will describe the most important user properties for the business and user interface layer. In addition, we will examine case study scenarios to identify best practices for using user properties to define specialized behavior of Siebel CRM applications. Business component and field user properties On the business layer of the Siebel Repository, user properties are widely used to control specialized behavior of business components and fields. The following table describes the most important user properties on the business component level. The Multiple Instances column contains Yes for all user properties which can be instantiated more than once per parent object: Source: Siebel Developer's Reference, Version 8.1: http://download.oracle.com/docs/cd/E14004_01/books/ToolsDevRef/booktitle.html
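To make the Name/Value convention concrete, here is a purely illustrative sketch of how a custom multi-instance user property record might look in Siebel Tools. The field names and the expression are hypothetical, and the suffix 13 follows the numbering-gap advice above; always check the Developer's Reference for the exact value syntax of the user property you are using:

Name:  On Field Update Set 13
Value: "Status", "Status Date", "Timestamp()"

Read as: whenever the Status field of the business component is updated, set the Status Date field to the current timestamp, declaratively and without writing any script.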
IBM Rational ClearCase: Challenges in Java Development

IBM Rational ClearCase 7.0: Master the Tools That Monitor, Analyze, and Manage Software Configurations
Take a deep dive into extending ClearCase 7.0 to ensure the consistency and reproducibility of your software configurations

Java

ClearCase was mostly (at least originally) written in C++, and its build model is well suited (with some historical adjustments to cope with templates in two generations of compilers) to development using this language. Java, although already old at the time of its wide success, broke a few significant assumptions of the model.

Problems with the Java build process

The traditional build model is stateless, and therefore easily reproducible: running the same build command in the context of static sources (leaves of the dependency tree, seen upwards from the products) produces the same results, but doesn't alter the context. This is not the case anymore with javac. The reason is trivial: javac integrates a build tool function into the compiler. The compiler reads the Java source as a build script and uses the information to build a list of dependencies, which it verifies first, using traditional time stamp comparison between the sources and class files produced, and rebuilding missing or outdated class files. It doesn't, however, perform a thorough recursive analysis, nor attempt to validate jars, for instance. This behavior is highly problematic from a clearmake point of view, as it results in the list of derived objects produced with a given rule (siblings of the target) being variable from one invocation to the next, and conversely, in a given derived object potentially being produced by several different rules. Both of these effects result in incorrect dependency analysis, and in spurious invalidation of previous build results. Let's note that since javac performs time stamp comparisons, the default behavior of cleartool to set the timestamp at checkin time is inadequate for Java sources, and results in needlessly invalidating classes produced before checkin: set up a special element type manager defaulting to the -ptime (preserve time) checkin option.

The second traditional assumption broken by Java is a practical one: the language has been designed to optimize compilation speed, which means that build time stops being a primary issue. This is of course obtained by using a single target, the Java virtual machine, and at the expense of run-time performance; but history has already clearly validated this choice, in the context of favorable progress in hardware. This is obviously not a problem in itself, but it had two clear consequences:

The winkin behavior of clearmake cannot be sold to users anymore on the argument of its saving build time (by side-effect). As we know, the argument of management accuracy appeals only to users who have experienced its importance, and a reward that comes only after investment is a known recipe for failure.

It encourages carelessness among developers: throw away (clean) and start again from scratch. Of course, the gain in speed is mostly felt in small configurations and at the beginning of projects: this strategy doesn't scale, as the total build time still depends on the overall size of the component, instead of on that of the increment (the number of modified files). It is, however, often too late to change one's strategy when the slowness becomes noticeable.
.JAVAC support in clearmake

Support for Java was added relatively late to clearmake (with version 2003.06.00), in terms of a .JAVAC special target (and a javaclasses makefile macro). The idea (to which your authors contributed) was to use the build audit to produce a .dep file for every class, which would be considered by clearmake in the next invocation, thus giving it a chance to preempt the javac dependency analysis. Of course, the dependency tree would only be as good as that of the previous compile phase, but it would get refined at every step, eventually converging towards one which would satisfy even the demanding catcr -union -check. Special care was needed to handle:

Inner classes (producing several class files per Java source, some of them being prefixed with the name of the enclosing class, with a dollar sign as separator—not a friendly choice for Unix shells).

Cycles, that is, circular references among a set of classes: a situation which clearmake could only process by considering the whole set as a common target with multiple siblings.

This solution should be very satisfactory, from the point of view of ensuring correctness (consistency of the versions used), sharing of objects produced, and thus managing by differences. It should offer scalability of performance, and therefore present a breakeven point after which it would compete favorably with from-scratch building strategies. One might add that a makefile-based system is likely to integrate with systems building components written in other languages (such as C/C++), as well as with performing other tasks than compiling Java code.

Let us demonstrate how the dependency analysis and derived object reuse work using the .JAVAC target in the makefiles, testing exactly the aspects mentioned above—inner classes and cycles. In our small example, Main.java implements the main class, and FStack.java implements another independent class, which the Main class is using. Finally, the FStack class also contains an inner class Enumerator, which results after the compilation in a file named FStack$Enumerator.class:

# Main.java
public class Main {
    public static void main(String args[]) {
        FStack s = new FStack(2);
        s.push("foo");
        s.push("bar");
    }
};

# FStack.java
public class FStack {
    Object array[];
    int top = 0;
    FStack(int fixedSizeLimit) {
        array = new Object[fixedSizeLimit];
    }
    public void push(Object item) {
        array[top++] = item;
    }
    public boolean isEmpty() {
        return top == 0;
    }
    public class Enumerator implements java.util.Enumeration {
        int count = top;
        public boolean hasMoreElements() {
            return count > 0;
        }
        public Object nextElement() {
            return array[--count];
        }
    }
    public java.util.Enumeration elements() {
        return new Enumerator();
    }
}

We create a tiny Makefile making use of the .JAVAC target. Note that we do not have to describe any dependencies manually; we just mention the main target Main.class, leaving the rest to javac and the ClearCase Java build auditing:

# Makefile
.JAVAC:
.SUFFIXES: .java .class
.java.class:
	rm -f $@
	$(JAVAC) $(JFLAGS) $<
all: /vob/jbuild/Main.class

The first run of clearmake does not look very spectacular: it just executes the javac compiler, submitting the Main.java source to it, and all three class files (FStack.class, FStack$Enumerator.class, and Main.class) get generated. The same would have been produced if we used the "default" Makefile (the same, but without the .JAVAC target):

$ clearmake -f Makefile
rm -f /vob/jbuild/Main.class
/usr/bin/javac /vob/jbuild/Main.java

Note though that one thing looks different from the default Makefile execution: our ".JAVAC" Makefile produces the following dependency (.dep) files:

$ ll *.dep
-rw-r--r-- 1 joe jgroup 654 Oct 19 14:45 FStack.class.dep
-rw-r--r-- 1 joe jgroup 514 Oct 19 14:45 Main.class.dep

But their contents are somewhat puzzling at the moment:

$ cat FStack.class.dep
<!-- FStack.class.dep generated by clearmake, DO NOT EDIT. -->
<version value=1 />
<!-- (A build of this target has not been directly audited.) -->
<mytarget name=/vob/jbuild/FStack.class conservative=true />
<mysource path=/vob/jbuild/FStack.java />
<!-- Target /vob/jbuild/FStack.class depends upon the following classes: -->
<target name=/vob/jbuild/Main.class path=/vob/jbuild/Main.class />
<cotarget name=/vob/jbuild/FStack.class path=/vob/jbuild/FStack$Enumerator.class inner=true />

$ cat Main.class.dep
<!-- Main.class.dep generated by clearmake, DO NOT EDIT. -->
<version value=1 />
<!-- (A build of this target has been directly audited.) -->
<mytarget name=/vob/jbuild/Main.class conservative=false />
<mysource path=/vob/jbuild/Main.java />
<!-- Target /vob/jbuild/Main.class depends upon the following classes: -->
<target name=/vob/jbuild/FStack.class path=/vob/jbuild/FStack.class precotarget=false />

So, it looks as if the FStack class was depending on the Main class, and the other way around as well. But that's all one can figure out after a single javac execution: the Main class was produced and, in order to compile it, two more classes were needed: FStack and FStack$Enumerator. But we can do better. Let's try a second subsequent clearmake execution, without any real changes (for our purpose: in a real work scenario, a new build would of course be motivated by a need to test some changes). It does not yield 'all' is up to date, as one would expect when using the default Makefile, but instead it does something interesting:

$ clearmake -f Makefile
rm -f /vob/jbuild/FStack.class
/usr/bin/javac /vob/jbuild/FStack.java
rm -f /vob/jbuild/Main.class
/usr/bin/javac /vob/jbuild/Main.java

Note that it does not even execute the default script, but rather some other one (/usr/bin/javac /vob/jbuild/FStack.java). Where did it come from? Actually from the FStack.class.dep dependency file mentioned above. And what about the dependency files themselves? They have somewhat changed:

$ cat FStack.class.dep
<!-- FStack.class.dep generated by clearmake, DO NOT EDIT. -->
<version value=1 />
<!-- (A build of this target has been directly audited.) -->
<mytarget name=/vob/jbuild/FStack.class conservative=false />
<mysource path=/vob/jbuild/FStack.java />
<!-- Target /vob/jbuild/FStack.class depends upon the following classes: -->
<cotarget name=/vob/jbuild/FStack.class path=/vob/jbuild/FStack$Enumerator.class inner=true />

$ cat Main.class.dep
<!-- Main.class.dep generated by clearmake, DO NOT EDIT. -->
<version value=1 />
<!-- (A build of this target has been directly audited.) -->
<mytarget name=/vob/jbuild/Main.class conservative=false />
<mysource path=/vob/jbuild/Main.java />
<!-- Target /vob/jbuild/Main.class depends upon the following classes: -->
<target name=/vob/jbuild/FStack.class path=/vob/jbuild/FStack.class precotarget=false />

And now this looks right! The FStack class depends on FStack$Enumerator, but it does not depend on the Main class, and this is noted in the modified FStack.class.dep. The Main class, on the other hand, does depend on FStack, and that is stated correctly in Main.class.dep. Now, if we try to run clearmake once again, it yields 'all' is up to date:

$ clearmake -f Makefile
'all' is up to date.

But this time it means that all the dependencies have been analyzed and recorded in the .dep files.
Oracle Siebel CRM 8: Configuring Navigation

Oracle Siebel CRM 8 Developer's Handbook Understanding drilldown objects In Siebel CRM, a drilldown is the activity of clicking on a hyperlink, which typically leads to a more detailed view of the record where the hyperlink originated. The standard Siebel CRM applications provide many examples for drilldown objects, which can mainly be found on list applets such as in the following screenshot that shows the Opportunity List Applet: The Opportunity List Applet allows the end user to click on the opportunity name or the account name. Clicking on the Opportunity Name navigates to the Opportunity Detail - Contacts View in the same screen while clicking on the Account name navigates to the Account Detail - Contacts View on the Accounts screen. Siebel CRM supports both static and dynamic drilldown destinations. The Opportunity List Applet (in Siebel Industry Applications) defines dynamic drilldown destinations for the opportunity name column depending on the name of the product line associated with the opportunity. We can investigate this behavior by creating a test opportunity record and setting its Product Line field (in the More Info view) to Equity. When we now drill down on the Opportunity Name, we observe that the FINCORP Deal Equity View is the new navigation target, allowing the end user to provide detailed equity information for the opportunity. To test this behavior, we must use the Siebel Sample Database for Siebel Industry Applications (SIA) and log in as SADMIN. We can now inspect the Opportunity List Applet in Siebel Tools. Every applet that provides drilldown functionality has at least one definition for the Drilldown Object child type. To view the Drilldown Object definitions for the Opportunity List Applet we can follow the following procedure: Navigate to the Opportunity List Applet. In the Object Explorer, expand the Applet type and select the Drilldown Objects type. Inspect the list of Drilldown Object Definitions. The following screenshot shows the drilldown object definitions for the Opportunity List Applet: We can observe that a drilldown object defines a Hyperlink Field and a (target) View. These and other properties of drilldown objects are described in more detail later in this section. There are various instances of drilldown objects visible in the previous screenshot that reference the Name field. One instance—named Line of Business defines dynamic drilldown destinations that can be verified by expanding the Drilldown Object type in the Object Explorer and selecting the Dynamic Drilldown Destination type (with the Line of Business drilldown object selected). The following screenshot shows the dynamic drilldown destination child object definitions for the Line of Business drilldown object: The child list has been filtered to show only active records and the list is sorted by the Sequence property. Dynamic Drilldown Destinations define a Field of the applet's underlying business component and a Value. The Siebel application verifies the Field and Value for the current record and—if a matching dynamic drilldown destination record is found—uses the Destination Drilldown Object to determine the target view for the navigation. When no match is found, the view in the parent drilldown object is used for navigation. 
When we investigate the drilldown object named Primary Account, we learn that it defines a Source Field and a target business component, which is a necessity when the drilldown's target View uses a different business object than the View in which the applet is situated. In order to enable the Siebel application to retrieve the record in the target View, a source field that carries the ROW_ID of the target record and the business component to query must be specified.

The following table describes the most important properties of the Drilldown Object type:
The following table describes the most important properties for the Dynamic Drilldown Destination type:

Creating static drilldowns

In the following section, we will learn how to create static drilldowns from list and form applets.

Case study example: static drilldown from list applet

The AHA Customer Documents List Applet (Download code - Ch:9) provides a unified view for all quotes, orders, opportunities, and so on, associated with an account. The applet should provide drilldown capability to the documents and the employee details of the responsible person. In the following procedure, we describe how to create a static drilldown from the AHA Customer Documents List Applet to the Relationship Hierarchy View (Employee), which displays the reporting hierarchy and employee details:

Navigate to the AHA Customer Documents List Applet. Check out or lock the applet if necessary.
In the Object Explorer, expand the Applet type, and select the Drilldown Object type.
In the Drilldown Objects list, create a new record and provide the following property values:
Name: Responsible Employee
Hyperlink Field: Responsible User Login Name
View: Relationship Hierarchy View (Employee)
Source Field: Responsible User Id
Business Component: Employee
Destination Field: Id
Visibility Type: All
Compile the AHA Customer Documents List Applet.

We will continue to work on the AHA Customer Documents List Applet later in this article.

Creating drilldown hyperlinks on form applets

Sometimes it is necessary to provide a drilldown hyperlink on a form applet. The following procedure describes how to accomplish this using the SIS Account Entry Applet as an example. The applet will provide a hyperlink that allows quick navigation to the Account Detail - Activities View:

Navigate to the Account business component. Check out or lock the business component if necessary.
Add a new field with the following properties:
Name: AHA Drilldown Field 1
Calculated: TRUE
Calculated Value: "Drilldown 1" (include the quotation marks)
Compile the Account business component.

Did you know? We should create a dummy field like in the previous example to avoid interference with standard fields when creating drilldowns on form applets. This field will be referenced in the drilldown object and control.

Navigate to the SIS Account Entry Applet. Check out or lock the applet if necessary.
In the Object Explorer, expand the Applet type and select the Drilldown Object type.
Create a new entry in the Drilldown Objects list with the following properties:
Name: AHA Activity Drilldown
Hyperlink Field: AHA Drilldown Field 1
View: Account Detail - Activities View
In the Object Explorer, select the Control type.
In the Controls list, create a new record with the following properties:
Name: AHA Activity Drilldown
Caption: Go to Activities
Field: AHA Drilldown Field 1
HTML Type: Link
Method Invoked: Drilldown
Right-click the SIS Account Entry Applet in the top list and select Edit Web Layout to open the layout editor.
Drag the AHA Activity Drilldown control from the Controls | Columns window to the grid layout and drop it below the Zip Code text box.
Save the changes and close the web layout editor.
Compile the SIS Account Entry Applet.
Log in to the Siebel client and navigate to the Account List view. Click on the Go to Activities link in the form applet and verify that the activities list is displayed for the selected account.

The following screenshot shows the result of the previous configuration procedure in the Siebel Web Client: Clicking the Go to Activities hyperlink on the form applet will navigate the user to the activities list view for the current account.
Open Text Metastorm: Making a Business Case

Open Text Metastorm ProVision® 6.2 Strategy Implementation
Create and implement a successful business strategy for improved performance throughout the whole enterprise

Recently, I had a meeting with the directors of a major household brand. They had just finished completing a strategic review of the business, which had been signed off by the board. They had numerous stakeholders to satisfy, and naturally the final result was a compromise, which was reached after months of meetings. I was with an experienced business coach who asked the CEO, "So, how will you know if you have succeeded?". The CEO had difficulty in answering the question. One of his senior managers in the room jumped to his rescue: "Oh, we are working out what we want to measure. We have agreed on some of the measures, but haven't yet put any numbers against them." The coach asked: "How do people in the organization feel about the new strategy?" The manager replied saying, "It's a good question. There are a lot of people who feel that it was a waste of time and they just want to get back to the real work. We have a team of very passionate and committed people, but they see strategy as navel gazing." We realized that the strategy was doomed.

It seems logical to define the goals and then identify the measures. In practice, it is often easier to do it the other way round, even though that is counter-intuitive. First, define the measures of success and then articulate these measures as goals. The employees were right to be dubious.

There are various ways to create, read, update, and delete information. These solutions fall into two major approaches—drawing and modeling. Drawing is still the approach adopted by many organizations because it is cheaper. Most companies that have considered using ProVision® have Microsoft Visio installed on every user desktop. As far as a project manager is concerned, Visio is a free resource. The project manager doesn't need to ask for funds, and if they do, then the manager will ask why they aren't using Visio. This article explains the limitations of drawing and why modeling is best practice and essential to support a sustainable strategy. The reader can use this information to make their business case compelling and get the funds that they need to do the job properly. Areas covered also include:

The benefits of moving to a central repository. (What needs to be stored and how users can access it.)
Are we building architecture or doing design? (The importance of language in getting your message across.)
Better decisions now. (The purpose and notion of good enough architecture.)
ProVision® and Metastorm BPM. (Is this Metastorm's unique selling point?)

The benefits of moving to a central repository

It is important to understand the difference between a drawing tool and a visual relational database such as ProVision®. The outputs can look identical. Because drawing tools are so much cheaper, why would you invest in ProVision®? There are a number of drawing packages available; Microsoft Visio is the key player in this market. It is a drawing package that is optimized for business diagrams. Modelers can select pre-built stencils to create models rapidly. As Visio is often installed already, it is the tool of choice of many business analysts.

Designed to scale

If all you need to do is visually represent an aspect of a business, then there is nothing wrong with Visio. However, its limitations soon become apparent.
To understand the key differences, I need to explain some basic concepts about ProVision®. These concepts are the reason why you can manage hundreds of models in ProVision®. It has been designed to scale. The core concepts are: Object Link Model Notebook/File Repository Object ProVision® uses objects to represent business concepts. Everything you see—a role, system, process, or a goal is an object. Every object has a name and an icon. After that, it is up to you how much more detail you want to add. All objects can have associations with other objects. These associations will vary, according to the type of object. Link A link is a special type of object. It is displayed as a line that connects two objects. You can modify the way the line displays, that is, change its color, thickness, or arrow shape. Links appear in their own area in the object inventory. Some types of link can have a payload (a deliverable) and events associated with them. For example, the link between two activities is called a workflow link. This link can display the deliverable that is passed as an output from one activity to the other. It can also display the event that triggers the deliverable. Both deliverables and events are optional. In many cases, it is obvious what the deliverable or event would be. To distinguish between an event and a deliverable, ProVision® places the name of the event inside double quotation marks. In the example shown in the following figure, once the event named "Model ready to review" is triggered, the deliverable called Draft Model becomes the input for the Review Model activity: Model A model is a special kind of object that contains other objects. Every model must have an object that is designated as the subject of the model. ProVision® provides three types of model—hierarchical, non-hierarchical, and navigator. Hierarchical models typically allow only one object type to appear. For example, a Systems model can be used to create only the hierarchical relationships of systems. No other object type can be added. Even if you create a custom object, you will not be permitted to add it to a hierarchical model. Non-hierarchical models permit more than one object type to appear, so that you can show predefined relationships other than parent-child relationships between these objects. For example, a Systems Interaction model will allow you to demonstrate relationships between systems and hardware. The Navigator model is a special model that you can use when you are unable to express the relationships that you want using one of the other model types. You can use virtually any object type on a Navigator model. This is both its strength and weakness. As any object can be used, you can become overwhelmed by the choices. The Navigator model is the only model type that allows you to display models as well as objects. By default, these display as thumbnails of the full model. In the following example, a Navigator model shows a thumbnail of the Approval Process workflow model. The subject of the workflow model is the activity called Approval Process. It has been placed to the right to demonstrate the difference. The Navigator model has one more purpose. You can use it to visually express indirect relationships. For example, a goal may be realized by delivering a product or service to a customer group. The product requires processes. Each process decomposes down into a series of activities, some of which might require certain computer systems to be in a running state. 
There is no direct relationship between the goal and the computer system. There is an indirect relationship, which you can visualize using a Navigator model. Notebook and file Objects, links, and models are stored in Notebooks. If you think of a Notebook as a physical book, then the models are equivalent to chapters. Each model tells a story. Inside each story, there are words. Objects are equivalent to words. Links create phrases. Several Notebooks can be stored together. They share a common modeling language that changes the look and feel of objects, links, and models. Other than that, each Notebook is independent. So, it is possible to have two objects in different notebooks that have the same name and type but are completely different. Objects and models can be dragged from one Notebook to another. You can save a Notebook under a different name and thus use the first Notebook as a template for the second. A File is the logical equivalent of a Notebook. The main difference is that a File can be saved anywhere on a computer and then e-mailed or copied onto a memory stick for transfer. So, Files are used to share and exchange Notebooks with other users. Only one Repository can be open at any one time. Only one Notebook can be open within the Repository. In the example shown in the next diagram, the Sample Repository appears in bold writing to highlight that it is open. Also, the PackT Notebook has a different icon to distinguish that it is the Notebook being viewed. Repository All Notebooks are stored in Repositories. A Repository can contain many Notebooks (in practice most users would be unlikely to have more than 50, and typically around 10). The modeling language is associated with a Repository, and once it is made the default language, it changes the names, look and feel of all objects, links, and models, irrespective of which Notebook they are in. In the following example, the Sample Repository is bold to indicate that it is open. The POC Repository has a world icon to represent that it is stored on a remote server and accessed via Knowledge Exchange®. By contrast, local Repositories have a pyramid icon. Now that you have understood these basic concepts, let's see how ProVision® varies from a drawing package such as Visio.
OpenLayers: Overview of Vector Layer

OpenLayers 2.10 Beginner's Guide
Create, optimize, and deploy stunning cross-browser web maps with the OpenLayers JavaScript web mapping library

What is the Vector Layer?

OpenLayers' Vector Class is generally used to display data on top of a map and allow real time interaction with the data. What does this mean? Basically, it means we can load in data from geospatial files, such as KML or GeoJSON files, and display the contents on a map, styling the data however we see fit. For example, take a look at this map: This shows a map with a Google layer as the underlying base layer and a vector layer on top of it. The data (all the circles with numbers in them) are loaded in from a GeoJSON file, an open file format that many other applications support. In the vector layer, there are a bunch of data points throughout the map. Each dot on the map is an object in the vector layer, and these objects are referred to as Features. In this case, each feature is actually a cluster of data points—the numbers in each circle represent how many points belong to that cluster. This clustering behavior is something we can use out of the box with OpenLayers via the Strategy class. Before we get to that point, let's talk about one of the main things that separate a vector layer from other layers.

What makes the Vector Layer special?

With a raster image, what you see is what you get. If you were to look at some close up satellite imagery on your map and see a bunch of buildings clustered together, you wouldn't necessarily know any additional information about those buildings. You might not even know they are buildings. Since raster layers are made up of images, it is up to the user to interpret what they see. This isn't necessarily a bad thing, but vector layers provide much more. With a vector layer, you can show the actual geometry of the building and attach additional information to it—such as its value, who owns it, its square footage, and so on. It's easy to put a vector layer on top of your existing raster layers and create features in a specific location.

The Vector Layer is client side

Another fundamental difference is that the vector layer is, generally, used as a client side layer. This means that, usually, interaction with the actual vector data happens only on the client side. When you navigate around the map, for instance, the vector layer does not send a request to a server to get more information about the layer. Once you get the initial data, it's in your browser and you do not have to request the same data again. Since, in most cases, the vector data is loaded on the client side, interaction with the vector layer usually happens nearly instantaneously. However, there are some limitations. The vector layer is dependent on the user's browser and computer. While most browsers other than Internet Explorer have been progressing exceptionally well and are becoming more powerful each day, limitations do exist. Due to browser limitations, too many features in a vector layer will start to slow things down. There is no hard number on the amount of features, but generally anything over a couple hundred features will start to slow things down on most computers. However, there are many ways around this, such as deleting features when you don't need them, and we'll talk about performance issues in more depth later.

Other uses

With the vector layer, we can display any type of geometrical object we'd like—points, lines, polygons, squares, markers...any shape you can imagine.
We can use the vector layer to draw lines or polygons and then calculate the distance between them. We can draw shapes and then export the data using a variety of formats, then import that data in other programs, such as Google Earth.

What is a 'Vector'?

In terms of graphics, there are essentially two types of images: raster and vector. Most images you see are raster images—meaning, basically, they are comprised of a grid of pixels and their quality degrades as you zoom in on them. A photograph, for example, would be a raster image. If you enlarge it, it tends to get blurry or stretched out. The majority of image files—.jpegs, .png, .gifs, any bitmap image—are raster images. A vector, on the other hand, uses geometrical shapes based on math equations to form an image, meaning that when you zoom in, the quality is preserved. If you were to zoom in on a vector image of a circle, the lines would always appear curved—with a raster image, the lines would appear straight, as raster images are made up of a grid of colors. Vector graphics are not constrained to a grid, so they preserve shape at all scales.

Time for Action – creating a Vector Layer

Let's begin by creating a basic vector layer. In this example, after you add some points and other feature types to your vector layer, try to zoom in. You'll notice that the points you added don't lose quality as you zoom in. We'll go over how it works afterwards.

We'll start off by using a basic WMS layer:

var wms_layer = new OpenLayers.Layer.WMS(
    'OpenLayers WMS',
    'http://vmap0.tiles.osgeo.org/wms/vmap0',
    {layers: 'basic'},
    {}
);

Now, let's create the vector layer itself. We'll use the default projection and default values for the vector layer, so to create the layer all we need to do is create it:

var vector_layer = new OpenLayers.Layer.Vector('Basic Vector Layer');

Add the layers to the map now:

map.addLayers([wms_layer, vector_layer]);

If we looked at the map now, we would just see a simple map—our vector layer does not have any data loaded into it, nor do we have any controls to let us add vector data. Let's add the EditingToolbar control to the map, which allows us to add points and draw polygons on a vector layer. To do so, we just need to instantiate an object from OpenLayers.Control.EditingToolbar and pass in a vector layer. We'll pass in the vector_layer object we previously created:

map.addControl(new OpenLayers.Control.EditingToolbar(vector_layer));

Take a look at the map now. You should see the EditingToolbar control (which is basically a panel control with control buttons). Selecting different controls will allow you to place vector objects (called features) on the vector layer. Play around with the EditingToolbar control and place a few different points / polygons on the map:

Now, one more step. You've placed some features (points / polygons / lines / etc.) on the map, but if you were to refresh the page they would disappear. We can, however, get the information about those features and then export it to a geospatial file. We'll work with files later, but for now let's grab the information about the features we've created. To access the information about the vector layer's features, all we need to do is access its features array. In Firebug, type and run the following:

map.layers[1].features

You should see a bunch of objects listed; each object is a feature you placed on the map:

[Object { layer=Object, more...}, Object { layer=Object, more...}, Object { layer=Object, more...}, ...]
Now, if you expand one of those objects, you'll get the information about a feature. The geometry property is an anonymous object each feature has which contains geometry information. You can also see the methods of the feature objects—try playing around with different functions. You can access the individual features by using map.layers[1].features[x], where x is the index of the feature in the features array. For instance, to destroy the first feature which we added to the map we could use: map.layers[1].features[0].destroy(); What Just Happened? We just demonstrated how to create a basic vector layer and added features to it using the EditingToolbar control. Using the features array of the vector layer, we also destroyed some features. As you've just seen, it's not terribly difficult to start using the vector layer— pretty easy, in fact.
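If you want to add a feature without using the EditingToolbar, you can also construct it in code. The following snippet is a minimal sketch using the OpenLayers 2.x API and the vector_layer object from the example above; the coordinates are arbitrary:

// A minimal sketch: creating a point feature programmatically
// and adding it to the vector layer from the example above
var point = new OpenLayers.Geometry.Point(-45, 20);
var feature = new OpenLayers.Feature.Vector(point);
vector_layer.addFeatures([feature]);

// Features added this way appear in the same features array:
// map.layers[1].features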
Scribus: Importing Images

Scribus 1.3.5: Beginner's Guide
Create optimum page layouts for your documents using productive tools of Scribus.

Importing and exporting: The concepts

To begin with, remember that there are two kinds of graphics you can add to your layout. You can have photos, generally taken from a digital camera, downloaded, or bought on some website. Photos will generally be stored in JPEG files, but you can also find PNG, TIFF, or many other file formats. The second kind of graphics is vector drawings such as logos and maps. They are computer-made drawings and are stored as EPS or SVG files. You will certainly need to work with both in most of your documents. The previous image shows the comparison between a photo on the left-hand side, and the same photo traced in vectors on the right-hand side. In the middle, see how the details of the photo are made up of square pixels that are sometimes difficult to handle to get good printing results.

The first difference is that with photos, you will need to place them in a frame. There are some tips to automatically add the frame, but anyway, a frame will be needed. Vector drawings are imported as shapes and can be manipulated in Scribus like any other Scribus object.

The second difference is that when working with photos, you will have to import them within Scribus. The term "import" is precise here. Most text processors don't import but insert images. In the case of "insert", the images are definitely stored in your document. So you can send this document to anyone via e-mail or can store it on any external storage device without caring about whether these images will still be present later. Scribus does it differently: it adds to the frame a reference to the imported file's own storage position. Scribus will look for the file each time it needs it, and if the file is not where it should be, problems may arise. Basically, the steps you go through while performing a DTP import and a text processor insert are the same, but the global workflows are different because the professional needs to which they refer are different. All the communication software, layout programs, website building software, as well as video editing software perform the same steps. Once you're used to it, it's really handy: it results in lighter and more adaptable files. However, while teaching Scribus, I have many times received mails from students who have trouble with the images in their documents, just because they didn't take enough care of this little difference. Remembering it will really help you to work peacefully.

DTP versus text processors again
Here we are talking about the default behavior of the software. As text processors can now work with frames, they often import the image as a link. Scribus itself should be able to embed pictures in its next release. Will the difference between these two pieces of software one day disappear?

The next difference is about the color space of the pictures. Cameras use RAW or JPEG (which are RGB documents). Offset printers, which everyone usually refers to, are based on the CMYK (Cyan, Magenta, Yellow, and Black) model. Traditionally, most proprietary packages ask the users to convert RGB files into CMYK. With Scribus there is no need to do so. You can import your graphics as RGB and Scribus will export them all as desired at the end of the process (which most of the time is exporting to PDF). So, if you use GIMP, you shouldn't be embarrassed by its default lack of CMYK support.
CMYK in GIMP If you really want to import CMYK photos into Scribus, you can use the GIMP Separate plugin to convert an RGB document to a CMYK. Batch converting can be easily done using the convert utility of ImageMagick. Consider using the new CMYKTool software too, which gives some nice color test tools and options, including ink coverage. That said, we now have to see how it really works in Scribus. Importing photos To import photos, you simply have to: Create an Image Frame with the Insert Image Frame tool (I) or with a shape or polygon converted to an Image Frame. Go to File | Import | Get Image or use Ctrl + D. This is the most common way to do it. On some systems, you can even drag the picture from your folder into the frame. The thing that you should never do is copy the photo (wherever it is) and paste it into your frame. When doing this, Scribus has no knowledge of the initial picture's storage location (because the copied image is placed in a special buffer shared among the applications), and that's what it needs. There are other ways to import an image. You can use: The DirectImageImport script The Picture Browser on some very new and experimental Scribus versions The DirectImageImport script is placed in the Script | Scribus Scripts menu. It will display a window waiting for you to specify which image to import. Once you validate, Scribus automatically creates the frame and places the image within. Some people find it useful because it seems that you can save one step this way. But, in fact, the frame size is a default based on the page width, so you will have to customize it. You might have to draw it directly with a good size and it should be as easy. But anyway, you'll choose the one you prefer. Always remember that each image needs a frame. The image might be smaller or bigger than the frame. In the first case it won't fill it, and in the second case it will appear cropped. You can adapt the frame size as you want; the photo won't change at all. However, one thing you will certainly want to do is move the picture within the frame to select a better area. To do this, double-click on the frame to go into the Content Editing mode; you can now drag the picture inside to place it more precisely. The contextual menu of the Image Frame will give you two menus that will help to make the size of the image the same as that of the frame: Adjust Image to Frame will extend or reduce the photo so that it fills the frame but keeps the photo's height to width ratio Adjust Frame to Image will modify the frame size so that it fits the image In both cases, once applied, you can change the size of the frame as you need to and the photo will automatically be scaled to fit the frame size. Using such options is really interesting because it is fast and easy, but it is not without some risk. When a frame shape is changed after the image is imported, the image itself doesn't change. For example, if you use any of the skew buttons in the Node window (Edit button of the Shape tab of PP) you'll see that the frame will be skewed but not the picture itself. Image changes have to be made in an image manipulation program. Relinking photos Once your photos have been imported and they fit perfectly in your page content, you may wish to stop for a day or show this to some other people. It's always a good idea to make your decisions later, as you'll see more weaknesses a few days later. It is generally at this point that problems appear. 
Relinking photos

Once your photos have been imported and fit perfectly in your page, you may wish to stop for the day or show your work to other people. It's always a good idea to make final decisions later, as you'll spot more weaknesses after a few days. It is generally at this point that problems appear.

If you send your Scribus .sla document for proofreading, the photos won't be displayed on the proofreader's screen because they are not embedded in the file. The same issue arises if you move or rename some folders or photos on your own computer. Since Scribus saves only the location of your files and loads them when necessary, it won't be able to find them anymore. In this case, don't worry: nothing is lost. Scribus simply has no idea what you have done with your files, and you just need to tell it. At this point, you have two possibilities:

- Import the photos that have disappeared again, so that Scribus can save their new location.
- Go to Extras | Manage Images. Show where the photos have gone by choosing a missing photo (shown with red diagonals crossing the image display area) and clicking on the Search button just below it.

When relinking, you can link to another image if you wish; the images just need to share the same name for Scribus to guess it is the right one. It is therefore very important to give unique names to your photos. You can define naming rules of your own, or keep the camera's automatic numbering.

If you need to send your document to someone, you can either send a PDF, which can't be modified, only annotated, or send the whole Scribus document along with the photos. Simply compressing the photo folder is not enough: the links Scribus stores are absolute, and the folder paths will differ on your reader's computer, so they would need to relink every picture, which can take a long time for some documents. The best technique is to use the built-in File | Collect for Output window. It asks you to create a new directory into which all the files needed for the document are copied with relative paths, so that it will work everywhere. Compress this folder and send it.

Time for action – creating a postcard

As an example, let's create a postcard. It's an easy document that doesn't need many features, so we can concentrate on image issues. We'll go through the dos and don'ts so you can experiment with the problems a bad workflow can cause.

1. Create a two-page, A6 landscape document with a small margin guide of 6mm.
2. After creating the page, use Page | Manage Guides and add one vertical guide with a gap of 5mm and one horizontal guide with a gap of 20mm.
3. In the first frame, import (File | Import | Get Image) the pict8-1.psd file. Notice that it is in a folder called Photos, along with some other files that we will use in this document. In the file selector, Scribus shows some file information: the size in pixels (1552x2592), the resolution (72x72), and the colorspace (RGB).
4. This image will look very big in the frame. Right-click and choose Adjust Image to Frame. It fits the height but not the width, and we'd also like the bottom of the flower image to sit at the bottom of the frame. Open the Properties Palette and, in the Image tab, select the Free Scaling option and change the X-Scale and Y-Scale to 12, or a value close to it. Now it should look better.
5. In the top right-hand frame, import pict8-2.jpg and place it as well as you can using the same procedure. Double-click on the frame and drag the picture to find the best placement.
6. In the last frame of the first page, import the pict8-3.png file. You can add a Text Frame and type something inside it, such as "Utopia through centuries", and set it nicely.
7. On the second page we'll use one horizontal guide without a gap and one vertical guide with a 5mm gap. On the right-hand part of the horizontal guide (use Page | Snap to Guides if you want to be sure), draw a horizontal line from one vertical guide to the other.
8. Keep this line selected and go to Item | Multiple Duplicate. In the window, choose 4 copies, leave the next option selected, and define a 0.4 inch vertical gap. These ruled lines are where the address will be written.
9. At the bottom left-hand corner of the same page, draw a new Image Frame and import the pict8-4.tif file into it. In the Image properties, scale the image to the frame size and deselect Proportional so that the image fills the frame perfectly, even if it is distorted. Then, in the XYZ tab of the PP, click on the Flip Horizontally button (the blue double-arrow icon).
10. We have now set up our pages and the photos are placed correctly. Let's create some errors deliberately. First, rename the Photos folder to PostcardPhotos. This must be done in your file browser; you cannot do it from Scribus.
11. Go back to Scribus. Things might not appear to have changed, but if you right-click on an image and choose Update Image, you will see that it is not displayed anymore.
12. To get everything working again, either rename your folder back or go to Extras | Manage Images. There you will see that every Image Frame of the document is listed, and that they contain nothing because the photos can no longer be found. For each image selected in this window, click on the Search button and specify, in the next window, which directory it has been moved into. Scribus will display the images it finds (only images with a matching name). Select one of the listed images and click on Select. Everything should be fine now.

What just happened?

After creating our pages, we placed our photos inside the frames. By renaming the folder, we broke the link between Scribus and the image files. The Manage Images window lets us see what happened: the full paths to the pictures are displayed there. In our example, all the pictures are in the same folder, but you could have imported pictures from several directories; in that case, only those inside the renamed folder would have disappeared from the Scribus frames. By clicking on Search, we told Scribus what happened to those pictures: they still exist, but somewhere else. Note that if you had deleted the photos, they would be lost forever. The best advice is to keep a copy of everything, the originals as well as the modified photos. Note also that if your images are stored on an external device, it has to be plugged in and mounted.

In fact, renaming the folder is not the only reason why an image might disappear. It happens when:

- We rename the image folder
- We rename any parent folder of the image folder
- We rename the picture
- We delete the picture
- We delete the whole folder containing the picture

Giving the document and pictures to someone else by e-mail or on a USB key is essentially similar to the second case. In the first three cases, the Manage Images window will help you find the images again (provided you know where they are). In the last two cases, you should be ready to look for new pictures and import them into the frames.
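Since broken links are so easy to create, it can be handy to check a document's images before sending it. The sketch below is only an illustration: it assumes the Scribus 1.3.x .sla format, where image frames record their linked file in a PFILE attribute, and the postcard.sla file name is a placeholder.

# Report which linked images of a .sla document are missing on disk.
import os
import xml.etree.ElementTree as ET

def check_image_links(sla_path):
    base = os.path.dirname(os.path.abspath(sla_path))
    for obj in ET.parse(sla_path).iter("PAGEOBJECT"):
        pfile = obj.get("PFILE")
        if not pfile:
            continue  # not an image frame, or an empty one
        full = pfile if os.path.isabs(pfile) else os.path.join(base, pfile)
        status = "OK" if os.path.exists(full) else "MISSING"
        print(status, "-", pfile)

check_image_links("postcard.sla")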
Processing and Managing Participants in CiviCRM
Using CiviCRM
Develop and implement a fully functional, systematic CRM plan for your organization.

Processing and managing participants

You've configured your event, tested it, and publicly promoted the online information page and registration form. Before you know it, event registrations start rolling in. Now what? As with so many other areas of CiviCRM, these records may be viewed collectively through search tools or on an individual-contact basis. In this section, we'll walk through an event registration as viewed through the contact's record, and then briefly review importing participant records.

Working with event registrations

A contact's history of event attendance appears in their Events tab. From this tab, you can view, edit, or delete an existing registration, or create a new registration for the contact.

Notice that there are two buttons above the event history listing: Add Event Registration and Submit Credit Card Event Registration. The first is used for registrations that do not involve real-time credit card processing through the system. This may include free events and payments by check, cash, EFT, or a credit card processed outside of the system. The second button should be used if you will be processing a credit card directly through CiviCRM. If you have not configured a payment processor, or your payment processor does not handle integrated payments (for example, PayPal Standard or Google Checkout, which redirect you to an external site to process the payment), that button will not be visible.

The new registration form lets you select the event you wish to register the individual for, the fees and fields specific to the event, payment options, and receipting options. The following screenshot shows the top half of the form.

There are several things to note about this form:

- The event drop-down list only shows current and future events. If you wish to register someone for an event in the past, you must click the "Include past event(s) in this select list" link, which reloads the form with the full list of events. This is done to reduce confusion and simplify event selection.
- If you have created custom data fields attached to participants, they will appear and be available only when your selections match their "used for" criteria. For example, if you have created custom fields for the participant role Guest, they will only appear when you change the role on this form. If you have custom fields attached to the event type Conference, they will only appear if the selected event is associated with that event type. If you are expecting but not seeing a certain custom field, make sure your selections match how that field is configured to be used.
- Directly below the event fee block is an option to record a payment with this registration. Checking the box reveals a series of contribution-related fields, as shown in the following screenshot.

It is important to understand that an event registration in which fees are collected involves both an event participant record and an associated contribution record. While you could process these separately, we strongly advise managing them through this single interface. Besides being easier than entering them separately (since you handle both records at once), doing so creates a link between the two records. If you return at a later date to view this event registration, you will see the related contribution record summarized below it.
Likewise, if you open the associated contribution record, you will see the event record summarized below it. Revenue totals for the event in reports will also reflect the linked records. Entering them separately will not build that connection.

Handling expected payments

Inevitably, you will receive event registrations by mail, fax, or phone in which payment has not been submitted with the registration. Though you have not actually received the payment, there is an expected payment, so the best practice is to enter it as a pending contribution. Use the Record Payment option to log the contribution, but do not complete the Paid By field, and change the Payment Status field to Pending.

Why is this recommended? For two reasons. First, it captures the reality of the data better: you have received a registration that implies a commitment to pay, which is different from a registration for a guest, speaker, or other VIP attendee whom you do not plan to charge. Second, it provides better tools for tracking payments due. If each registrant in the above scenario has a pending contribution payment, you can easily run a search to find the total due and process invoices or follow-up communication accordingly. In essence, it gives you a better overview of your actual financial position and a clear data path to those who owe you payment.

Registrations received through your public-facing event registration page will also have both an event and a contribution record created. Pay-later registrations will have contribution records with a status of Pending, indicating that payment has not yet been received. When you receive payment, first record the details in the contribution record and change its status to Completed. Doing so automatically changes the status of the associated event registration record to reflect that the payment has been received. Note that the reverse action does not have the same effect: changing the status of a registration to completed does not change the status of its associated contribution record. This supports situations where you want to allow people to attend the event (marked completed) even though they will pay after the event (marked pending).

Before leaving the event record displayed within the contact record, we want to point out one additional feature. From the Events tab, click on View to see the registration details. On this screen, you'll notice a button to create a name badge. Clicking it directs you to a form where you select the template to be used and trigger the creation of a PDF file containing the name badge. In the following Tracking, searching, and reporting section, we will review how to create name badges for all event participants in bulk. For now, it's useful to see how an individual name badge can be created.

Importing participant records

As with other areas of CiviCRM, the event functions include a tool for importing event registrations. This is particularly useful when you are initially migrating data from an external database such as MS Access or MS SQL Server, and it may come in handy at other times depending on your organization's structure and how CiviCRM is being used.

Let's say your organization consists of five chapters geographically arranged to cover the entire state. Each chapter hosts local events and handles all onsite management through volunteers.
The registration process is centralized through the state-wide organization using CiviCRM, so the participant list is generated and e-mailed to the chapter coordinator the day before the event. Suppose some of these events allow walk-in registrations and others include continuing education credits that require verified attendance in order to be earned. In other words, the organization must track not only whether people have registered, but also whether they actually attended.

You choose to handle this by sending a .csv (comma-separated) export file to the chapter coordinator the evening before the event. The coordinator welcomes people as they arrive and uses spreadsheet software to mark each person who attends in the .csv file, adding new rows for walk-ins. That file is sent back to the main office at the conclusion of the event and is imported into CiviCRM in two steps: existing registrants are imported using the Update option, where the participant status value reflects who attended; new registrants are imported using the Insert option and are matched with existing records using their name and e-mail.

The import tool is very similar to what we saw in other areas. The four-step wizard consists of loading a file and configuring the import settings, mapping the file's fields to CiviCRM fields, previewing the results, and completing the import. An error file will be generated and made available for download if problems are discovered with any records. To access the import tool, visit Events | Import Participants.

There are a few things to note that are specific to importing participants:

- Participant import only accepts .csv files. You cannot connect to a MySQL database as you can with the contact import.
- The most significant difference between importing participants and importing contacts is the behavior of the Update option for handling duplicates. The Update option requires the presence of an existing participant record, which must be identified using the unique participant ID value. Consequently, you will only use it in scenarios similar to the one we just discussed, where the participant list is exported from the system, changed, and then imported back in.
- If the Update option is used, CiviCRM will not process new registration records. In this way it differs from the contact import, which matches and updates existing records and creates new records for those that do not match.
- As you might expect, the field mapping options available for the participant import include a number of registration-related fields. Take note of those in red, as they are required in order to import successfully. They include the Event ID and the Participant Status; the former can be obtained from the Manage Events page.
- Several of the fields highlighted in red are used for matching to the contact record. Not all of these are required; you only need enough for a valid match. For example, you do not need both the internal contact ID and the first/last name: either is sufficient for making a match.
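The coordinator's attendance-marking step can also be scripted rather than done by hand in a spreadsheet. The following is only a sketch: the "Email" and "Participant Status" column headers and the file names are assumptions about the export layout, and the ATTENDED set stands in for a hypothetical list of check-ins, so adjust everything to match your actual .csv.

# Mark attendance in an exported participant .csv before re-importing
# it with the Update option.
import csv

# Hypothetical e-mail addresses checked in at the door
ATTENDED = {"jane.doe@example.org", "sam.lee@example.org"}

with open("participants_export.csv", newline="") as src, \
        open("participants_marked.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        # "Attended" is a stock CiviCRM participant status; the header
        # names used here are assumed, not guaranteed by the export
        if row.get("Email", "").lower() in ATTENDED:
            row["Participant Status"] = "Attended"
        writer.writerow(row)

The resulting file is then imported back with the Update option described above, keyed on the unique participant ID.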
Tips and Tricks for Process Modeling in Open Text Metastorm
Open Text Metastorm ProVision® 6.2 Strategy Implementation
Create and implement a successful business strategy for improved performance throughout the whole enterprise.

Identify and engage the process owner
Tip: Identifying and engaging the process owner is the essential first step. Unless the process owner is fully committed to process improvement, it will not happen, and any attempt will be met with resistance; failing to engage a committed process owner guarantees failure to improve the process in the long term. The people who understand the process best are those engaged in it, whether as customers, suppliers, or the staff who run it, so getting engagement from everyone who has responsibility is the best way to deliver transformation. The appreciative inquiry method is an innovative way of doing this: it engages individuals in organizational renewal, change, and focused performance through the discovery of what is good in the organization. This leads to the second top tip.

Talk to the people who deal with errors
Tip: Engage the process staff from the beginning, especially those who fix mistakes, and ensure their participation in the process improvement. They know what it takes to fix a mistake, so they can help design the process to prevent mistakes from occurring. Managers frequently do not know how work is really done; they may think they do, but in reality the work is often done in other ways. Look for informal processes based on relationships and local knowledge, which are often more important and effective than the formal, documented process.

Capture the current "What" in detail, but not the "How"
Tip: The two most important aspects of a process are the what and the how. What information is required to run the process? The answer is data. How is value created and enhanced? The answer is the process itself. If the current process is broken, the new process will probably reuse the data but not the process, so it is only necessary to model the current process at a high level. Ask "why" at least five times to get to the root cause of a process problem.

Reduce moments of truth
Tip: The customer judges a process by the experience they have when they engage with the organization. For any given transaction, strive to limit the number of contacts with a client. Think of Amazon's one-click process for ordering a book: isn't that preferable to filling out a long form? Seek to minimize these moments while adding value for the customer and reducing their effort; in short, simplify every process for your customers.

Reduce handoffs
Tip: Many problems occur at the handoff from one person to another. The fewer the handoffs, the less opportunity there is for delay and miscommunication. Where handoffs are essential, consider parallel rather than sequential processing.