This chapter provides background on the meaning and history of Enterprise Content Management (ECM). If you are just getting started with ECM, this chapter will give you some interesting context with which to approach both the rest of the book and your upcoming ECM experiences. The chapter is short enough that we suggest you read it straight through (if you are an old hand at ECM in general, then we probably don't have to tell you what parts you can read over lightly in this chapter).
When we joined FileNet in 1998, there was no such term as ECM. Every vendor and analyst had his or her own terminology, generally some sort of variation of document management. These days, it seems like every experienced IT professional must be conversant in ECM terminology. Sometimes the familiarity is genuine, and sometimes it's merely a blizzard of buzzwords. Although ECM is a standard industry term these days, it's not always clear what it means except by example. By the end of this chapter, you will be able to explain it in simple terms.
This chapter contains the following topics:
Introductory definition of ECM
Some motivating use cases for using ECM
Most important ECM product features
Historical and emerging ECM-related standards
Things commonly confused with ECM
ECM is automation for providing essential control of access to information vital to the operation of an organization. Control isn't just about limiting access, though that's an important part of it. It's more about being organized in your approach to finding, collecting, storing, and retrieving content, regardless of the specific applications you eventually choose to use for those aspects. It's called enterprise content management because its scope is not limited to a single department or division. It is not merely an application, but a complete platform for supporting disparate applications, information sources, and processes. In this case, content means so-called unstructured content: documents and objects of various sorts that don't have the easy-to-parse luxury of highly-structured data. There certainly are important structured data aspects to unstructured content, as we'll see in later chapters, but the emphasis is to move beyond transactional business data.
The term "enterprise" does not have to mean a commercial business organization. ECM solutions are also widely deployed in local, national, and international government organizations, and volunteer and non-profit organizations. What all of these enterprises have in common is a need to manage their content, usually at scale, to meet organizational objectives. Whether those objectives are called business objectives, compliance requirements, or something else, it is clear that they translate to the same things at a technical level.
We will use the terms "enterprise" and "business" interchangeably in this book. ECM concepts are seldom limited to commercial entities.
The vision of ECM is to use a strong IT infrastructure to harness content that is already in use throughout an organization. Once content is under centralized control, it must then be made available for use by a variety of users, technical and non-technical, for both ordinary and extraordinary needs of the organization. You must have both halves of this picture. The business cannot reliably use content if it is incomplete, incorrect, or not readily available. On the other hand, locking content into an IT fortress is of limited value if business users cannot access it for the legitimate needs of the enterprise.
Over the years, precursors to ECM moved gradually from nothing at all, to local groups managing content, to departmental point solutions, and on to enterprise roll-outs of true ECM platforms. The goal of ECM is to do for unstructured content what relational databases long ago did for structured content.
Before the widespread use of general purpose database software, applications devised their own means for storing transactional data. An enterprise might develop a reusable software component to serve the needs of multiple applications, but this still left the data isolated. The development of application data storage containers is not the core business of most enterprises. When relational databases came along, it was easy to see the benefits of using them as a standardized platform component. New applications, utilities, reports, and so on, could be easily written without disruption to the existing body of applications because the storage and retrieval aspects were delegated to the database.
Though databases are an important component of ECM platforms, unstructured content has additional challenges that an ECM platform addresses. For example, there are often more elaborate referential integrity constraints and more fine-grained security requirements that lie outside the design "sweet spot" of a relational database.
To get an immediate feel for some of the problems that ECM can solve, let's look at a few typical use cases. These are just examples of popular scenarios to give you some concrete idea of what ECM is all about. There are certainly many more that are not covered here.
It seems pretty obvious these days that there is a benefit to centrally managing business documents, but it was not always so. Early systems for centralized management tended to also mean giving up control of your documents to that central authority, and it isn't always obvious whether that's a good idea. Modern ECM systems focus on centralized technical control (secure storage, reliable backups, high availability, and so on) while leaving business control of the information in the hands of the appropriate users.
In the early days of electronic documents, if you needed to see the latest copy of a document, you tracked down the author and asked for it. That system had a couple of weaknesses. You couldn't get a document from Floyd if Floyd was out sick or on an airplane or just unavailable (Floyd might also just get tired of being asked). You also couldn't get a document from Floyd if Floyd himself lost track of his document or if Floyd's hard drive failed.
File sharing was among the first techniques for overcoming these problems. Someone, perhaps IT or perhaps the local computer-savvy user, set up a widely-accessible shared directory. Various users could "publish" documents by placing them in the shared directory. Although this solved some of the original problems, it brought with it other problems. With more than a handful of users, the organization of the shared directory tree could become quite chaotic. This is especially problematic if there are multiple versions of the "same" document in the shared directory. Some of these copies could come about from the master copy of the document being revised by the author; others would come from different users making their own safe copies in the shared directory. Conventions for subfolders and file-naming are easy to invent, but it takes a lot of user discipline to keep up with those conventions over time. Perhaps the biggest problem with this technique is that metadata (properties that describe content) is limited to what the underlying filesystem provides, and that is usually limited to simple creation and modification bookkeeping.
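To see how thin filesystem metadata really is, here is a small Python sketch that inspects everything a typical filesystem records about a file. Note that nothing in the result speaks to author, title, version, or approval status, which is exactly the gap described above.

```python
import os
import tempfile

# Create a throwaway file to stand in for a shared-directory "document".
fd, path = tempfile.mkstemp()
os.close(fd)

info = os.stat(path)

# Everything the filesystem can tell us: size, ownership IDs, and
# creation/modification timestamps. That's the whole metadata story.
print(info.st_size)   # content length in bytes
print(info.st_mtime)  # last modification time (seconds since epoch)
print(info.st_ctime)  # metadata-change (or creation) time, platform-dependent

# There is no field for author, document title, version number,
# keywords, or anything else a business application cares about.
os.remove(path)
```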
The early techniques for content management may have lacked finesse, but the motivation behind them was sound. An organization of any size needs a reliable system for maintaining a "single version of the truth". That is, you should be able to get the information you need and be confident that you are not using an out-of-date copy, an unpublished draft, or an otherwise unofficial version. An ECM system helps in two major ways:
First, it facilitates your technical ability to access the information whenever you decide and from wherever you decide
Second, it facilitates the bookkeeping for current, past, and in-process versions
When these two major factors are reliably provided, users readily acknowledge that the ECM repository holds the master copy of content. They see immediate benefits for themselves and for the enterprise, and their willing surrender of local content to the repository multiplies the benefit over time.
It would be nice if you could decide what documents were important to your organization. You'd identify them, give them the proper attention, and waste no resources on other things. Today, few organizations can take that approach. Laws and regulations require businesses and other organizations to keep more and more information about the decisions they make (or fail to make). Even if you are not subject to formal regulatory compliance, you probably find it necessary to exercise control over business documents as a necessary contingency for the possibility of court proceedings.
It may seem sufficient to simply institute some sort of best practices for the handling of various kinds of documents. Perhaps you could also institute an annual employee sign-off that they were aware of those best practice requirements. Unfortunately, in most cases, that approach is no longer acceptable. In the eyes of regulators, auditors, court judges, and other outsiders, you must not only do proper record-keeping but also be able to prove that you did it properly. Leaving things up to individual responsibility sounds great, but it leaves you with a lot of risk.
The primary risk in compliance and litigation cases is that your enterprise has acted inappropriately. There is a secondary risk: that you have not followed rules or best practices for keeping records of what your enterprise has done. An ECM solution can reduce this secondary risk by automatically doing a large part of your record-keeping. Not only will your repository securely hold your master copy of some particular document, but it will also hold a tamper-proof copy of the entire revision history of the document. Security access can be adjusted as the document moves through various phases of its life cycle. Finally, when the document has reached the end of its usefulness, it can be automatically and definitively purged from the repository.
An early scenario for content repositories was to streamline business procedures for handling the documents put into those repositories. That may sound a bit circular, but what we mean is "An enterprise has business processes that are central to its operation". Automated handling of those processes, known as Business Process Management (BPM), is only reliable when the content those processes depend on is itself reliably available. A content repository serves that role nicely.
A common example is that of a mortgage broker processing a large number of applications for loans. In a typical loan processing scenario, dozens of separate documents, some seen by the applicant, some purely internal, must be gathered from different sources before the final decision can be made to grant the loan. Those documents include property appraisals, income verification documents, title insurance policies, and various internal documents supporting the underwriting process.
It is a daunting task just to keep track of the status of the comings and goings of those documents, a great number of which are faxed or scanned images. Add to that the assembly of those documents into packages for various decision-making steps and you start to see a very busy highway of information flow. The applicant might call at any time to supply information or inquire about status. It is completely unrealistic for a customer service agent to track down physical copies of documents. It is often the case that the people using the documents are in different physical locations, separated by anything from alleyways to oceans.
Document-centric workflow is an industry term for that part of BPM concerned with routing and processing documents. The documents are typically part of some decision-making flow, as in the mortgage example. In an ECM solution, the document itself lives in a repository, and the workflow system contains a link to it. This maintains the advantage of the master copy concept described in the first use case, and it also allows fully-electronic and automated handling of the business process itself. The actual document is accessed as needed, and the workflow will typically update properties of the content in the repository with the results of decision-making or processing steps.
There are a lot of different notions about what specific things go into ECM and what are merely "something else". Every vendor has its own prioritized list, generally based on what its own product's strengths are. This section is a list of things that we would absolutely look for if we were selecting an ECM system (and please pay no attention to the fact that we might have an IBM employee discount). Except for the first item, the list is not in any specific order of importance. All of these things are important for any ECM platform. There are dozens of additional features that could be mentioned which are important for particular scenarios.
This section might seem to border on salesmanship because most of these points are strengths of FileNet products. Have we selected them on that basis? We prefer to think of it as having made sure the products have the features that we independently think are important.
There are several other players in the ECM arena, most of them, by definition, FileNet and IBM competitors. We'll let them speak for themselves (and write their own books). We will note, however, that some of those vendors offer point solutions rather than a comprehensive platform. A point solution is optimized for some particular task, use case, or scenario, but it does not necessarily have what it takes when your needs grow beyond that. Sometimes the answer is to stitch together a constellation of point solution components in the hope of making a well-running whole. Other vendors may offer products with a wider range of features but which suffer in scalability or usability as more and more applications are added or more and more parts of the organization participate in ECM. We'll get off our sales soapbox now and let you come to your own conclusions.
The number one priority has to be a well-constructed repository that will not lose anything put into it. This may seem like a blindingly obvious requirement for any modern IT system, but there still exist solutions whose guarantees in this area bear scrutiny. To protect against loss of information, an ECM repository will provide things like system-enforced referential integrity checks, fully-transactional updates, and mechanisms to prevent one user's changes from accidentally overwriting another user's changes. We're obviously talking about things well beyond healthy hard drive platters.
Access to the ECM repository must support authentication that is comparable to the best authentication mechanisms used elsewhere in your organization. Likewise, it must support authorization checks at the level of individual items and types of operations. For example, giving someone the authority to update a document should not automatically give them the authority to delete it. Having the authority to delete a document should not automatically mean you have the authority to delete other documents.
Permissions for items in the repository must be settable with enough granularity that you can accommodate unique situations, but there must also be a workable defaulting mechanism so that the mere setting of permissions doesn't become a burden. The security aspects of the ECM system must not only keep the bad guys out but also avoid putting up barriers to the good guys.
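The combination of fine-grained permissions and workable defaults can be sketched in a few lines. This is an illustrative model only; the class names (`Acl`, `Folder`, `Document`) and operation names are our own, not any vendor's actual API. It also shows the point from above that "update" authority does not imply "delete" authority.

```python
# A minimal sketch of per-item permissions with inherited defaults.

class Acl:
    def __init__(self, grants=None):
        # grants maps a principal name to the set of allowed operations
        self.grants = grants or {}

    def allows(self, principal, operation):
        return operation in self.grants.get(principal, set())

class Folder:
    def __init__(self, default_acl):
        self.default_acl = default_acl

class Document:
    def __init__(self, folder, acl=None):
        self.folder = folder
        self.acl = acl  # None means "inherit the folder's default ACL"

    def check(self, principal, operation):
        effective = self.acl if self.acl is not None else self.folder.default_acl
        return effective.allows(principal, operation)

# Most documents simply inherit the folder default, so setting
# permissions is not a per-document burden...
legal = Folder(Acl({"alice": {"read", "update"}, "bob": {"read"}}))
memo = Document(legal)
assert memo.check("alice", "update") and not memo.check("alice", "delete")

# ...but a unique situation can still get its own fine-grained ACL.
contract = Document(legal, Acl({"alice": {"read", "update", "delete"}}))
assert contract.check("alice", "delete")
```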
It's tempting to say that the ECM system must run on a wide range of hardware and software platforms, but it's more realistic to say that it must run on the platforms that are important to you. Unless you already plan to use a variety of different platforms, then support for a variety of platforms only matters directly in providing you choices as you might evolve your infrastructure over time. Indirectly, support for many platforms is one of the factors that can help you develop a feel for whether a particular vendor has a breadth of technical expertise.
Anyone can build a solution that performs adequately in a development environment by banging a couple of rocks together. A real test, and the real requirement, is that it performs well under the production load you expect to have and beyond that. Up to a certain point, you will be able to add more load by using faster servers, adding more memory, and so on. There will definitely come a point, however, where the "bigger and faster hardware" approach will reach its limits. How can your ECM system cope beyond that point? You will need an architecture, in the product and in your own deployment, that scales across multiple servers. In the ideal case, there should be no architectural factors that limit scaling in any practical sense; you should, for example, be able to add servers into clusters to scale to any arbitrary load that you might someday encounter.
It's an easy bet that the ECM needs you have today will not be the same as the ECM needs you have in a year or two. It's a common phenomenon that organizations like the benefits of their first tastes of ECM so much that they want to expand their early efforts into wider and wider areas. You want an ECM system that is ready to grow when you are.
Look for an ECM system from a vendor who actively fosters a rich partner and third-party ecosystem. Partners can provide unique applications, integration services, and specific knowledge about particular business scenarios. Sometimes this complements the vendor's own capabilities and sometimes it even competes with them. For someone who is using an ECM system, it pays to have options.
Your ECM system should interoperate well with the rest of your enterprise infrastructure. That means things like using your enterprise directory for authentication and supporting the way you run your datacenters, perform backups, and operate high-availability and disaster recovery configurations. If your enterprise has IT infrastructure in multiple geographical areas, the ECM system must adequately support a distributed deployment.
Even if you do not plan to develop any custom applications yourself, you want an ECM system with feature-rich, well-supported, well-documented APIs with a record of stability. You or a third party might need to write some "glue" code to integrate other enterprise systems, or you might someday decide to augment out-of-the-box applications with something that is unique to your organization. In any case, the existence of strong APIs indicates at least two things: the vendor is willing to provide a mechanism for the necessary customization and integration of its product, and the vendor understands that it is providing a platform. There are ideas for applications and implementations that go beyond those that the vendor provides. Even if you never plan to use them directly, the quality of the APIs tells you something about the nature of the overall ECM system. It is also an important factor in the vendor and partner ecosystem mentioned above.
Would you like to know when certain types of documents are created or updated? Maybe you have automated follow-up steps that you want to perform in such cases, or maybe you just need someone to take a look. If you only had one application talking to your content repository, you would have no need for the ECM system to provide notifications or triggers when things happened. Your application would simply pay attention and make the notifications itself. You will have many applications as your use of ECM grows, and it's not good design to keep rebuilding notification logic into all those applications. Instead, you should look for that sort of feature from the ECM solution itself, and it should be suitably configurable for your specific needs.
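The argument above, that notification logic belongs in the repository rather than being rebuilt in every application, is essentially an event-subscription design. Here is a hedged sketch in Python; the `Repository` class and event names are illustrative, not a real product API.

```python
# Repository-side event subscriptions: follow-up logic registers
# interest once, and no authoring application needs to know about it.

from collections import defaultdict

class Repository:
    def __init__(self):
        self._subscribers = defaultdict(list)  # (event, doc type) -> callbacks
        self.documents = {}

    def subscribe(self, event, doc_type, callback):
        self._subscribers[(event, doc_type)].append(callback)

    def _notify(self, event, doc_type, doc_id):
        for callback in self._subscribers[(event, doc_type)]:
            callback(doc_id)

    def create(self, doc_id, doc_type, content):
        self.documents[doc_id] = (doc_type, content)
        self._notify("created", doc_type, doc_id)

# A follow-up step (here, just recording the ID) subscribes to new
# invoices; creating a memo triggers nothing.
seen = []
repo = Repository()
repo.subscribe("created", "invoice", seen.append)
repo.create("doc-1", "invoice", b"...")
repo.create("doc-2", "memo", b"...")
assert seen == ["doc-1"]
```

In a real ECM platform the subscription would be a configurable repository object and the callback an external handler, but the division of responsibility is the same.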
As documents are revised, you want to be able to keep track of changes that have been made. You want to be able to retrieve a past version. You want to be able to find out who made the change, when it was made, and so on. It should be your decision about how many past versions are kept, and your needs may vary from document to document or from type to type.
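The version bookkeeping described above can be sketched as a small data structure: each check-in records who, when, and what, and a per-document retention setting caps how many past versions are kept. This is an illustrative model only; real repositories do this server-side with tamper-proof storage.

```python
# A sketch of per-document version history with a configurable
# limit on retained past versions.

from datetime import datetime, timezone

class VersionedDocument:
    def __init__(self, max_versions=10):
        self.max_versions = max_versions
        self.versions = []  # oldest first

    def check_in(self, author, content):
        self.versions.append({
            "number": len(self.versions) + 1,
            "author": author,
            "when": datetime.now(timezone.utc),
            "content": content,
        })
        # Enforce the retention policy, which may vary per document type.
        if len(self.versions) > self.max_versions:
            self.versions.pop(0)

    @property
    def current(self):
        return self.versions[-1]

doc = VersionedDocument(max_versions=2)
doc.check_in("floyd", "draft")
doc.check_in("alice", "reviewed")
doc.check_in("floyd", "final")
assert doc.current["content"] == "final"
# The oldest version was dropped to honor the retention limit.
assert [v["author"] for v in doc.versions] == ["alice", "floyd"]
```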
There are many scenarios for applying a workflow to a document, launched perhaps on document creation or update. Making such calls to your workflow system should not only be functionally easy but should also perform well, and a tight integration promotes both.
There should be features to search for content on arbitrary criteria. Given one item that you have already located by whatever means, it should be easy to navigate to related items. The meaning of "related" can be different in different contexts. It will sometimes be determined by applications and sometimes by individual users. You don't want a system that only supports predefined relationships.
For regulatory and other compliance reasons, as well as for plain old good stewardship, you may need to be able to say who had certain access when. Perhaps just as importantly, you need to be able to tell who tried to do something but was turned back by security access checks.
Different ECM platforms have different terminology for metadata. It means the accumulated data about the content: who owns it, when it was last changed, how big it is, and so on. You should be able to extend the built-in metadata with your own, and there should be various handy data types available (for example, integers, dates, strings).
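The idea of extending built-in metadata with your own typed properties can be sketched as a merged, validated schema. The property names and class names here are illustrative assumptions, not any platform's actual metadata model.

```python
# Sketch of extending built-in metadata with custom typed properties.

from datetime import date

# Properties every platform tends to supply out of the box.
BUILT_IN = {"owner": str, "size": int, "modified": date}

class DocumentClass:
    def __init__(self, name, custom_properties):
        self.name = name
        # Merge vendor-defined and user-defined property schemas.
        self.schema = {**BUILT_IN, **custom_properties}

    def validate(self, metadata):
        for key, value in metadata.items():
            expected = self.schema.get(key)
            if expected is None:
                raise KeyError(f"unknown property: {key}")
            if not isinstance(value, expected):
                raise TypeError(f"{key} must be {expected.__name__}")

# An "Invoice" class adds its own string, integer, and date properties.
invoice = DocumentClass("Invoice", {
    "invoice_number": str,
    "amount_cents": int,
    "due": date,
})
invoice.validate({"owner": "alice", "amount_cents": 12500,
                  "due": date(2011, 3, 1)})
```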
There have been many attempts to create standardized interfaces and APIs for ECM systems over the years, including several that predate the term ECM. ECM is no different from other areas of the computer industry in that the availability of standards helps customers and independent software vendors create applications and add-on components that will work with more than a single vendor's products. The downside is that cross-vendor standards often cater to only a core set of common features or (worse) provide for a wide range of optional features that vendors may freely choose to implement or not. This can place a burden on application writers when they want to exploit a feature not available in every supported ECM product.
This is a brief survey of some of the standards you may see mentioned, but it is not meant to be exhaustive. Although having a longer list of implemented standards is generally better than having fewer, what really matters is whether an ECM solution implements standards that are important to you.
Several of these standards are of only historical and contextual interest. We haven't given web links to them, but have tried to use precise terminology and document titles so that you can readily track them down for yourself if you are interested in more detail.
All of the standards on this list were born of high hopes, but those hopes have not always borne fruit. As a disclaimer, we should mention that we personally participated in some of these efforts. As we prepared this list, it was striking to us how many different standards organizations have touched this area.
Perhaps the most widely-known industry organization related to ECM is the Association for Information and Image Management (AIIM). The predecessor organization that became AIIM was founded in 1943. AIIM's mission is to promote standards, provide education, foster best practices, and generally serve as a clearinghouse for ECM-related matters. It also hosts several recurring ECM-related conferences.
Starting in the mid-1990s, AIIM served as the secretariat coordinating the development of two related standards: DMA and ODMA.
The Document Management Alliance (DMA) was the name of a group of cooperating organizations that developed an AIIM-sponsored standard. All major players in the document management industry at the time participated to one degree or another in DMA.
The standard, released in 1997, was also named DMA and was intended to provide two major things: an architectural model for how a document management system would interact with other components, and a set of standardized interfaces. Although DMA compliance is seldom mentioned in requirements specifications these days, its terminology and architectural concepts continue to influence many current ECM products.
The Open Document Management API (ODMA) was a set of standardized conventions and APIs, and a software development kit allowing desktop document management applications to manipulate documents from multiple vendor repositories. The intent was to make the user's view of repository document access as simple as accessing documents on a local filesystem. ODMA 1.0 was published in 1994, and ODMA 2.0 was published in 1997. Today, ODMA is of mostly historical interest, and other, more modern application-integration technologies are in reasonably wide use. As with DMA, its legacy lives on in some vendor products as commonly used terminology and architectural concepts.
RFC-2518, HTTP Extensions for Distributed Authoring -- WEBDAV, was published as a standards track document by the Internet Engineering Task Force (IETF) in February 1999. The "V" in WebDAV stands for "versioning", but the first standardization effort was scaled back and did not include it. Versioning was addressed by a subsequent effort. RFC-2518 has since been made obsolete by RFC-4918, HTTP Extensions for Web Distributed Authoring and Versioning (WebDAV), published in June 2007.
You can tell from the name that WebDAV was an HTTP-specific access mechanism. It defined protocol-level extensions for resource access. Many scenarios are possible, but the first use case people think of for WebDAV is desktop applications accessing a document repository (the other motivating use case, source code management, drove many features in the WebDAV specification, but it wasn't really feasible to realize it with the first version of WebDAV since it lacked versioning support).
Early WebDAV implementations were plagued with incompatibilities between vendors. Things have mostly settled down these days, but if you have a requirement for WebDAV support, you would be wise to ask your ECM vendor about support for the specific client applications you are using. If you have a need for versioning support in the WebDAV implementation, you should ask about that too. Some WebDAV implementations do not provide it.
Content Repository for Java Technology API was published in 2005 as JSR 170 under Sun's Java Community Process. It is commonly referred to as Java Content Repository (JCR). A follow-on version of JCR was published in 2009 as JSR 283, Content Repository for Java Technology API Version 2.0. JCR is (obviously) Java-specific technology. It specifies a set of standard interfaces that a repository vendor can implement to provide JCR access to a repository.
A cross-vendor initiative, Content Management Interoperability Services (CMIS), was announced publicly in September 2008. Many ECM solution vendors announced, then or since, intentions to implement CMIS access to their repositories. In fact, there are already many CMIS implementations available. CMIS itself underwent standardization at the Organization for the Advancement of Structured Information Standards (OASIS). The final standard was ratified in May 2010, and we can expect to see several more vendor implementations (or updates).
CMIS exposes access to content repositories as a collection of related RESTful APIs and web services for common ECM needs. That makes it technology-neutral for the calling clients. It's also well-suited to a modern design paradigm of calling into an adapter layer for a specific repository. The hope is that CMIS can serve as that repository-specific adapter layer for most everyday purposes and need only be supplemented for truly unusual operations. We're far from the first to say that CMIS hopes to be for ECM what SQL is for databases.
Let's digress into a discussion of a few things that ECM is not. We're making this digression because these are common points of confusion.
There are a lot of software products available, both open source and commercial, that fall into the category of Content Management Systems (CMS). You can find references to hundreds or thousands of them with a simple web search. There's a good chance that you are already familiar with one or more of these, and there is even a good chance that you think one or more of them is fantastic. We won't disagree with that. We've used a few open source and commercial CMS implementations, and some of them are really quite good at what they do.
CMS is not ECM, although it is not unusual for an ECM platform to have components very similar to those of a CMS. In such cases, you might be able to use your ECM platform as a CMS, but you will not generally be able to use a CMS as an ECM platform.
A typical CMS consists of a single application, or a small collection of applications, and a backing database. Almost all are self-contained systems aimed at organizations that want a secure and controlled process for publishing material on websites. The applications consist of web-based content editors and page layout engines. There will be built-in collaborative workflow for a content approval cycle, typically with some kind of role-based permissions system.
The most well-known examples of CMS are highly-tuned applications for managing the lifecycles of web content with a minimum of technical knowledge for the assorted writers, editors, and approvers. The difference between these CMS applications and an ECM platform is in those very words: a CMS is often more like a single application than a system of anything. There may or may not be points of integration with other applications.
It's not a particularly unusual reaction for someone to look at all the things that make up a typical ECM platform and conclude that they could do it faster, simpler, cheaper, or better by just creating a relational database with pointers to content files in a filesystem and a few web pages to act as the frontend application. It often seems to them that much of the complexity and size of ECM is self-induced. If only things were limited to the simple things actually needed, the whole thing would be a lot smaller and tidier. Perhaps you, the reader, are thinking that very thing.
Well, why should you not do that? If you are something of an ECM visionary with enough like-minded colleagues, you very well might find success in doing it. Otherwise, you are statistically very likely to spend a lot of effort building things that are eventually replaced by an ECM system when the maintenance burden and backlog of things to do become too much to bear. If you're a stubborn person, please don't take this as a challenge! Our real aim is to get you to your goals as soon as possible without making you travel through the purifying fire of a gnarled custom implementation.
This custom approach is certainly possible and relatively straightforward for almost any given single document management task. In fact, it's quite educational to do so because it can help you understand your own needs more clearly. For example, it's easy to write a web application that allows many people to upload document content for storage in a secure location and with which they can later download those documents. A single database table with only a few columns can do all of the bookkeeping of document ownership, keywords, on-disk location, and so on.
If you have no other document management system in your organization, this sort of database application will be greeted warmly. The first thing you know, someone will ask you to add a feature for keeping track of multiple versions of a document as it gets revised from time to time. Someone else will ask you for a mechanism for sharing a particular document with a specific set of people. Yet another someone will ask if they can organize the documents in something that looks like a folder structure so that they are easier to find. Someone will ask if you can automatically notify someone when certain kinds of documents are modified. Early success will lead to quite an imaginative list of features to implement. Although you may have thought of some of them at the beginning, you probably didn't have time to implement them, so they just went onto your "to do" list. Sooner or later, the same people who were slapping you on the back to congratulate you for your early efforts will be ready to wring your neck for not yet implementing the things they asked for.
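To make the point concrete, here is a minimal sketch of that single-table bookkeeping. The table layout, column names, and functions are purely our own invention for illustration, not taken from any product:

```python
import sqlite3

# Minimal single-table bookkeeping for a homegrown document store.
# Schema and names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE documents (
        doc_id    INTEGER PRIMARY KEY,
        title     TEXT NOT NULL,
        owner     TEXT NOT NULL,
        keywords  TEXT,               -- comma-separated keywords
        disk_path TEXT NOT NULL       -- where the content file actually lives
    )
""")

def store_document(title, owner, keywords, disk_path):
    """Record a newly uploaded document and return its id."""
    cur = conn.execute(
        "INSERT INTO documents (title, owner, keywords, disk_path) "
        "VALUES (?, ?, ?, ?)",
        (title, owner, keywords, disk_path),
    )
    conn.commit()
    return cur.lastrowid

def find_by_keyword(word):
    """Naive keyword lookup; a real system needs far more than LIKE."""
    rows = conn.execute(
        "SELECT title, disk_path FROM documents WHERE keywords LIKE ?",
        (f"%{word}%",),
    )
    return rows.fetchall()

doc_id = store_document("Q3 Budget", "alice", "budget,finance",
                        "/store/0001.xlsx")
print(find_by_keyword("budget"))
```

Everything beyond this (versioning, folders, access control, notifications) has to be bolted on one request at a time, which is exactly the trajectory described in this discussion.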
Just as most organizations would not seriously contemplate creating their own relational database system from scratch, the ECM landscape is rich enough that it seldom makes sense to build it yourself. Of course, once you have a target ECM platform, it often makes sense to develop your own custom applications on top of it, just as it is routine to implement database applications on top of an off-the-shelf DBMS.
It's hard to escape the parallels between some fundamental ECM platform features and those offered by source code management (SCM) systems. The feature of a secure, centralized repository of multiple-versioned documents comes immediately to mind, as do several others. Software development organizations certainly regard the management of source files as critically as business users regard the management of spreadsheets (in our experience, seasoned professional software developers tend to regard it even more critically, perhaps because they understand the impact of making colossal mistakes).
Is it feasible to build ECM out of SCM? It's probably technically possible, assuming you select an SCM system that has the scalability, security, and other features you are looking for. It's probably also a lot of work, though.
The design centers for the two types of systems are different. For most organizations, the volume of documents handled by an ECM system will be orders of magnitude larger than those handled by SCM. Tools available from the vendor or third parties for an SCM system will be well-suited to the lifecycles of software development artifacts, but they may be non-existent or poorly suited for use by typical business users. For example, an SCM is likely to have reasonable integration with integrated development environment (IDE) tools used by developers and testers, but it's unlikely to have any integration at all with office productivity applications.
Once again, you will find yourself creating or customizing tools and features from scratch. It's not impossible to implement ECM on top of SCM, but it's not the right tool for the job. In the same way, it would probably be quite a bit of work to implement SCM on top of an ECM platform because a lot of the features of SCM are in the applications (or what would look like an application to ECM).
FileNet was founded in 1982, and the first product was an imaging solution that capitalized on newly-available optical disk technologies. Because of the state of the industry at that time, FileNet's first offerings were more complete and self-contained than seems imaginable in today's environment. They included custom hardware for the scanners, the storage modules, and the user workstations. They also included custom operating system software, custom network protocols to connect the various components, and an innovative application called WorkFlo that eventually evolved into what we now know generically as workflow.
As the industry evolved, so did FileNet's products. While maintaining its heritage of industry leadership in imaging solutions, the company moved into the area of document management. In 1995, FileNet acquired document management company Saros and, in the late 1990s, launched Panagon, an integrated suite of products that included classic imaging, document management, and workflow. In contrast to the early years, these products did not rely specifically on proprietary hardware and low-level software.
In 2001, FileNet announced a new product line called Brightspire. Brightspire was a completely new technology stack with a Content Engine (CE) based entirely on Microsoft platforms and technologies, though it also included Java APIs. Within a couple of years, the platform component was renamed to P8 and the content-related product was renamed to FileNet Content Manager. The CM 4.0 release was significant in introducing Content Engine Multiplatform (CEMP), a rewrite of the CE as a J2EE-based application.
FileNet was acquired by IBM in 2006, and the product line became known as IBM FileNet Content Manager. As you can see, despite a few twists and turns along the way, CM is a mature product that evolved in-house and is not an amalgam of unrelated parts from opportunistic acquisitions.
As of this writing, the prevalent release in the field is IBM FileNet Content Manager 4.5.1, and the remainder of this book is based on that release. CM 5.0 has just been released, and we will mention some of its features throughout later chapters.
In this chapter, we gave you a whirlwind tour of ECM, including dispelling popular misconceptions. We covered some historical underpinnings of document management along with past and present standardization efforts, and we described the essential features that you should look for in any ECM platform.
From this point, we will be moving immediately into the practical matter of getting your FileNet system up and running. Chapters 2 through 6 will guide you through installing a complete, standalone CM system and give you an overview of the main applications.