31 May 2012

How not to set your IT budget

If you have read my posts in the past, you will know that I advocate the use of the following formula for determining the ROI for any given improvement project (whether IT-related or not):
ROI = (delta-T – delta-OE) / delta-I

Where T = Throughput, OE = Operating Expense, and I = Investment.
Incidentally, where there is no change in I (Investment, including changes in inventory) or the change in I is negative, then projects can be compared based on profit alone. That formula is simply:
Profit = delta-T – delta-OE.
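
Here’s a minimal sketch of both calculations in Python; the toc_roi and toc_profit names and the project figures are mine (hypothetical), with only the $200,000 investment echoing the example below:

```python
def toc_roi(delta_t, delta_oe, delta_i):
    """TOC ROI: the change in Throughput, less the change in
    Operating Expense, divided by the change in Investment."""
    return (delta_t - delta_oe) / delta_i

def toc_profit(delta_t, delta_oe):
    """The shortcut when delta-I is zero or negative: compare
    projects on the change in profit alone."""
    return delta_t - delta_oe

# Hypothetical project: $350,000 more Throughput per year,
# $50,000 more Operating Expense, $200,000 invested.
print(toc_roi(350_000, 50_000, 200_000))  # 1.5 (150% annual ROI)
print(toc_profit(350_000, 50_000))        # 300000
```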

However, here’s what far too many IT projects’ ROI calculations look like:

ROI (don’t know) = ((never took time to estimate it) – (never took time to calculate it)) / $200,000
 
The only figure the company knows going into the project is the estimated “investment” or “cost” of the project.

The common excuse

The common excuse for not calculating an ROI for an improvement project is that changes in Throughput and changes in Operating Expenses are “too hard to estimate,” and “if they are estimated, they will be wrong anyway.”

This argument is specious on the face of it. Think about it!
The $200,000 estimated “cost” or “investment” value of the project is likely to be wrong, too. But that does not keep the CIO and CFO from making their best efforts to calculate that value.

The Real Reason

Of course, the real reasons that CIOs and CFOs do not take time to calculate a real and measurable ROI for their IT (and other) improvement projects are likely two-fold:
  1. Too many CFOs and CIOs are under the wrongheaded impression that the value of IT (or other improvements) is both “automatic” and “cannot be measured.” When it comes to new technologies they have succumbed to the strange notion that new technologies are like an engine additive for business—you just pour them in and somehow your business will run smoother, faster, longer and get higher mileage! And, just like people who buy engine additives, they never take time to calculate whether there was any real benefit from using the product.
  2. They have never taken time to actually determine what root-cause they are attacking with the IT (or other) improvement project, so they do not really know whether the project will actually lead to increased Throughput or will, in fact, drive down or hold the line on Operating Expenses. In fact, they probably do not even know what the “weakest link” is in their customer-to-cash stream or whether that weakest link is internal to their organization or whether it lies somewhere outside their organization in their supply chain.
Isn’t it time to stop that kind of folly? Can businesses still expect to thrive and grow without taking a sound look at how and why they are spending their most valuable resources—time, energy and money?

I don’t think so.


14 May 2012

Dynamic Buffer Management (DBM) for the Supply Chain


Here is the presentation I made to the RKL eSolutions ERP User Group in Lancaster, PA, on Friday, 11 May 2012. Please contact me directly via the link below if you would like a copy of the accompanying white paper, as well.
[Embedded presentation: Dynamic Buffer Management (DBM) for the Supply Chain]

04 May 2012

Misleading allocations and how to fix it–Part 2

[This is a continuation that will make very little sense to you if you don’t go back to read Part 1. Sorry.]

ACTIVITY-BASED COSTING ALLOCATIONS

Well, the partners were disappointed with these results, for sure. So, they decided to try Activity-Based Costing (ABC) allocations. This time, the administrative overhead is allocated based on their analysis of the amount of activity the partners must undertake for each job type.

[Table: ABC allocation of administrative overhead by job type]

The ABC allocation of non-administrative overhead was done based on production-hours ($9,000 divided by 1,000 hours = $9.00 per production-hour).

The results of the partners’ new calculations (based on the historical product mix) are shown below, where you will note that company profit remains the same ($4,100 per month).

[Table: ABC job profitability at the historical product mix]

However, new priorities emerge: now the most profitable jobs appear to be landscaping (at $35 per job) and gutter guards (at $28 per job).

Based on these data, the partners rearrange priorities to allocate resources (i.e., the 1,000 hours of production time available) to capture the available markets for these job-types first. The results of this change in priorities may be seen in the following table:

[Table: Results after reprioritizing by ABC job profitability]

As in the previous example, at first things look good: “calculated profits” jump to $7,924. But after subtracting the overhead not absorbed (by the abandoned job-types), the results are disappointing: only $1,300 per month in net profit.


HOW TO FIX IT: THROUGHPUT ACCOUNTING VIEW

Throughput accounting eliminates all allocations, deducting from revenue only those costs that are truly variable with changes in revenue. Typically, those costs are things like raw materials, commissions (maybe), outside processing costs, piece-rate labor—but not much else.

When you look at these Throughput Calculations, you will see two critical factors:

[Table: Throughput calculations by job type]

  1. Throughput per Job (Revenues less Truly Variable Costs or TVCs)
  2. Throughput per Constraint-Hour (Throughput divided by the time used on the constraint—in this case, the 1,000 hours of production time from the workers is the constraint to making more money)

So, looking at the Current Business and Profitability, you will see that another column has been added that represents the company as a whole, or “the system.” Throughput is totaled across the enterprise into this column, and then operating expenses are deducted from Throughput.

[Table: Current business and profitability (the “system” view)]

“Direct Labor” is not included in TVC; instead, it is included in Operating Expenses. Why?

Because in most organizations, so-called direct labor is not a TVC. Many times the payroll expense for labor will be the same whether the firm produces 10,000, 12,000, or 8,000 widgets in a month. Not to mention the fact that the payroll for “direct labor” (falsely so-called) sometimes includes payments for PTO, training or other non-productive time.

Note, again, that using Throughput Accounting, we still get the same net profit calculations ($4,100 per month).

Now, with this new information in hand, the partners decide to prioritize sales and production to capture the market in order of T/C-Hr (Throughput per Constraint-Hour) until they run out of constraint-hours (i.e., the 1,000 hours available to them each month). The results of these new priorities are shown in the table below marked as Revised by Throughput per Constraint-Hour.

[Table: Revised by Throughput per Constraint-Hour]

Wow! Profits are boosted to nearly 2.3 times their previous level—$9,410 per month or $112,920 annually—after fully covering all of “the system’s” overhead. In this case, they sought out and captured the 250 plumbing jobs available to them in the market as a top priority. Their second priority was to capture the 145 gutter guard jobs available to them. They had a few of the 1,000 hours left, so they were also able to do 16 window cleaning jobs.
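
If you would like to see the prioritization logic laid bare, here is a minimal Python sketch. The per-job throughput and hours figures are hypothetical placeholders (the real ones live in the tables above); only the 1,000 constraint-hours, the $27,000 of monthly overhead, and the greedy ordering by T/C-Hr come from the example, so the printed profit will not match the tables:

```python
# Greedy allocation of constraint-hours by Throughput per
# Constraint-Hour (T/C-Hr). The per-job data is hypothetical.
jobs = [
    # (name, throughput per job ($), hours per job, market demand)
    ("plumbing",        90.0, 2.0, 250),
    ("gutter guards",   75.0, 3.0, 145),
    ("window cleaning", 40.0, 4.0, 120),
    ("landscaping",     50.0, 6.0, 100),
]

CONSTRAINT_HOURS  = 1_000.0   # five workers x 200 hours/month
OPERATING_EXPENSE = 27_000.0  # total monthly overhead from the example

# Rank job types by throughput per constraint-hour, best first.
jobs.sort(key=lambda j: j[1] / j[2], reverse=True)

hours_left = CONSTRAINT_HOURS
total_throughput = 0.0
for name, t_per_job, hrs, demand in jobs:
    n = min(demand, int(hours_left // hrs))  # jobs that still fit
    total_throughput += n * t_per_job
    hours_left -= n * hrs
    print(f"{name}: {n} jobs ({n * hrs:.0f} hours)")

# Placeholder inputs, so this won't equal the article's $9,410.
print(f"Net profit: {total_throughput - OPERATING_EXPENSE:,.2f}")
```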


Hopefully, this helps you see two things:

  1. The inherent dangers in believing data coming from an ERP manufacturing (or project accounting) system where the profit figures are clouded by allocations of overhead.
  2. The simplicity and clarity provided by looking at your clients’ organizations as “a system” and helping them view their goal as optimizing the entire “system,” not trying to make decisions based on data that may imperfectly represent “system” performance.

Let me know if this is valuable to you. Thanks.


03 May 2012

Misleading allocations and how to fix it–Part 1

Two things about which I warn my clients who buy manufacturing software are these:

  1. Manufacturing software is capable of capturing, storing and reporting on reams of data
  2. If you are not careful, you will find yourself taking “as fact” the data produced by the system and being misled in your decision-making

Why is this so?

Because ERP systems allow the users to create allocations of overhead based on manufacturing “drivers.” In Sage 500 ERP’s case (as shown in the screen image below), the chosen driver is “labor hours”—for run time and set-up time.

[Screenshot: Sage 500 ERP Set Up Work Center screen]

In the Sage 500 ERP Set Up Work Center screen there are places for “Fixed Setup” costs and “Fixed Run” costs. The values placed here are used to absorb “Fixed” overhead costs at the rate supplied based on each hour of “Setup” or “Run” time calculated for production utilization of the Work Center.

The problem is that these “absorption rates” must be calculated based on historical (or expected future) utilization rates of each Work Center. These calculations must make assumptions about product mix, work center utilization rates and operating expense levels. As soon as any of these factors changes:

  • Product mix
  • Work center utilization rates
  • Overhead expenses

the data supplied by the calculations will be wrong.

And, since either the product mix or the total of operating expenses will certainly be different than the numbers used in the calculations, the data resulting from the calculations will (virtually) always be wrong.
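
To make that concrete, here is a quick Python sketch of how a fixed absorption rate goes stale; the 800-hour actual figure is hypothetical, while the $27,000 overhead and 1,000 hours mirror the simplified example below:

```python
# How a fixed absorption rate goes stale (hypothetical figures).
budgeted_overhead = 27_000.0  # monthly fixed overhead
budgeted_hours    = 1_000.0   # assumed work-center utilization
rate = budgeted_overhead / budgeted_hours   # $27.00 absorbed per hour

# The product mix shifts, and only 800 hours actually run:
actual_hours   = 800.0
absorbed       = rate * actual_hours             # $21,600
under_absorbed = budgeted_overhead - absorbed
print(f"Overhead not absorbed: ${under_absorbed:,.0f}")  # $5,400
```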

A simplified example


We are going to look at two different allocation methods and the decisions that might be derived from such calculations.

  • Standard overhead allocations by Job (equivalent to allocation per work order in a manufacturing operation)
  • Activity-Based Costing (ABC) allocation based on production hours

In order to make the allocations easy to follow, you will see that the company is a service company and that the firm has three partners (administrative overhead) and some relatively fixed overhead in the form of vehicle leases, maintenance and so forth.

The direct labor (production labor) comes from five employees who—to make it simple—all work exactly 200 hours per month and all make exactly the same rate—$10 per hour. This also gives “production” a known capacity—1,000 hours per month.


The partners have kept good track of their history over the last six months and have also done enough market research to have a good handle on the size of the market they are serving. They know, therefore, how many of each kind of job they have done each month (on average), as well as the market potential for the kinds of jobs they do.

[Table: Average monthly jobs and market potential by job type]

STANDARD COST ALLOCATIONS (by Job)

In an attempt to leverage what they have learned by capturing data about past performance and, of course, to improve profitability, the partners do an analysis that includes a standard allocation of overhead to each job.

[Table: Standard allocation of overhead by job]

From this analysis, they discover that their most profitable jobs are landscaping jobs ($35 per job), followed closely by window cleaning jobs ($30 per job). So, they decide to satisfy the market demand in that order, using the resources they have (1,000 hours of production time).

Before we move on, note that with their present product mix, the company is producing a profit of $4,100 per month ($49,200 per year).

The results of this action are shown here:

[Table: Results after reprioritizing by standard-allocation job profit]

Upon first glance, it appears that this has been a great move. Based on the calculations in the table, profit has moved from $4,100 per month to $7,200 per month!

Again, the problem is that since NO plumbing or gutter guard jobs were done, some of the overhead (allocated at $90 per job) was not absorbed in the calculations. The total overhead is $18,000 plus $9,000, or $27,000. But the 220 jobs only absorbed 220 times $90, or $19,800 in overhead. That leaves $7,200 in overhead NOT absorbed. Take that $7,200 away from the calculated profit of $7,200 and the company is actually worse off (zero profit) after having reallocated its resources to what appeared to be the “most profitable jobs.”
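
The arithmetic is easy to verify; a few lines of Python using only the figures just cited:

```python
# Verifying the under-absorption arithmetic from the example.
total_overhead  = 18_000 + 9_000   # $27,000 per month
allocation_rate = 90               # dollars allocated per job
jobs_done       = 220              # landscaping + window cleaning only

absorbed    = jobs_done * allocation_rate   # $19,800
unabsorbed  = total_overhead - absorbed     # $7,200
calc_profit = 7_200                         # per the allocation table
print(calc_profit - unabsorbed)             # 0 -- no real improvement
```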



[To be continued—be sure to watch for Part 2!]


25 April 2012

Bad policies hurt the supply chain

I think just about everyone involved with understanding and managing supply chains agrees that the supply chain works best when volatility is minimized. Some organizations go to great pains and expense trying to figure out ways to manage their supply chains when faced with sudden demand changes and volatility.

Nevertheless, many supply chain participants continue to maintain policies that actually increase volatility in their own supply chains. Here are some examples:

  1. Short-term promotional pricing
  2. Volume discounts linked to shipment batches
  3. Period-end promotions
  4. Salesperson incentives linked to period-end dates

Short-term promotional pricing

Short-term price promotions contribute to the bullwhip effect and create tremendous inefficiencies all up and down the supply chain. The policy—especially when repeated with some frequency—causes buyers to hoard product. They buy extra-large batches of product when “on sale,” and store it up against the days when the product is not “on sale.”

Some short-term promotions are so predictable that buyers actually delay purchases at regular prices knowing that, if they wait, they can buy at a lower price later.

By the time all of the costs and expenses to the supply chain are added up, it would be difficult—in most cases—to prove that short-term promotional pricing actually adds to the bottom line at all. In fact, studies by some firms specializing in creating and managing pricing mechanisms have shown that consistent pricing at a marginally lower level actually produces more sales and profits than higher prices accompanied by short-term promotions.

Consider a brand like Wal-Mart. This is a firm that has master-crafted its supply chain and built its reputation on consistently lower prices. By doing so, it has—over the last several decades—supplanted long-established retail giants such as Sears, Penney’s and Kmart. Yet, Wal-Mart is not known for “sales” (i.e., short-term promotions). It is known for its consistently lower prices.

Volume Discounts linked to shipment batches

Let me say, off the bat, that there is nothing wrong with volume discounts, per se. The problem is linking volume discounts to transfer batches. In order to realize volume discounts without causing supply chain hoarding and needless volatility, the volume discounts should be separated from the shipment batch.

For example, your customer might get a volume discount if they agree to buy 100,000 units next year, but you might agree to deliver the product in relatively equal weekly or monthly shipments. This evens out production (on the supply end) and warehousing (on the receiving end), and it doesn’t make it look like someone sold 50,000 units in March and another 50,000 units in September with little or no activity in between.

Period-End Promotions and Salesperson Incentives linked to Period-End Dates

These two are frequently related. Salespeople with the need to reach certain goals for end-of-quarter or end-of-year sales, in order to boost their commissions, begin a big push. This push is usually accompanied by some authority to also offer special discounts.

All up and down the supply chain, prices are being discounted, volatility is being recklessly increased, and, all the while, production lines and warehouses are increasing their operating expenses to meet the boost in demand. Overtime and extra staffing costs are eating up the lion’s share of the profits that might otherwise have been generated if volatility had been reduced, rather than increased, by rational policies.

 

Of course, there are other wrong-headed policies that needlessly lead to higher volatility in our supply chains, but these are a few that come to mind. These are things well within the span of control of executives and managers, where corrective action is easy and comes at little or no cost. It just takes rethinking the way we do business and not being afraid to gore some existing “sacred cows.”

20 April 2012

Understanding the “chain” in supply chain management

After 30 years of growth and development, I am not at all certain that I would rename “supply chain management” to anything else. What I might try to do is to get people to recognize the real implications of the name it already has.

Let's look at that key middle word in the name: "chain."

Very few organizations "manage" the supply chain as a "chain."

A great many managers and executives are content to manage only their "link" in the chain. If things don't go well, they may try to substitute one connected link for another (changing vendors or finding new customers, for example). But they do not recognize or manage the chain as a chain. They still manage pretty much within the four walls of their own "link" (i.e., company).

The important thing to understand about a "chain" is the interdependence of the links and that the strength of the entire chain is governed entirely by the strength of the weakest link in the chain.


The interdependence of a chain should drive organizations inexorably toward supply chain collaboration and, even further, toward a genuine mutuality. In many cases, the fastest, best and most secure way for organizations to improve their own profitability is to work together with other supply chain participants to strengthen the weakest link in the chain—not seeking to replace that link. That means that all the participants in the supply chain—or at least the strategic links—must be (or become) open to collaboration and even invite new ideas from other participants in the chain.

Collaboration and end-to-end data sharing can help end the damaging effects of "the bullwhip," help firms in the supply chain break their frequently misguided addiction to large batch sizes, and help redefine purchasing and pricing metrics that can lead to more frequent replenishment while holding both truly variable costs and operating expenses low for all the participants.

High-level meetings should be sought among the executives and managers of all the critical players in the supply chain. The healthiest supply chains are those where all the participants are making satisfactory profits, and a few strong players are not using their leverage to increase profits through policies that weaken other important links in the chain.

How can you tell when your "supply chain management" team is beginning to act like they are part of a "chain" and not just content to manage their own "link"? Look for the following signs:

  1. Metrics and actions taken for improvement reach outside "our link" and efforts are made to optimize the "whole chain" by identifying and seeking to strengthen the weakest link.
  2. Management up and down the supply chain have learned to not ignore the industry's larger ecosystem. They monitor the ecosystem for signs of impending change, manage proactively, and share information freely.
  3. Supply chain managers recognize that there will always be a "weakest link" and, while seeking to strengthen the present "weakest link," learn to pace the flow of products by the "drum" of the present "weakest link." They also recognize that any loss of productivity at the present "weakest link" is productivity lost to the whole supply chain. (As a corollary, supply chain managers should recognize that time, energy and money spent strengthening links other than the present "weakest link" will not improve the performance of the "chain.")
  4. Managers and executives involved in the supply chain have ceased using metrics stuck in "cost-world" thinking and have seen that it is synchronizing product flow and increasing throughput that lead to ongoing improvement and higher profits.
  5. Supply chain managers have recognized that profits depend upon meeting customers' needs and demands, and that understanding these needs and demands is essential from product design forward through all the processes and links in the supply chain.
  6. Collaboration across the supply chain begins with product design so that maximum external variety (end-products) can be achieved with minimal internal variety (raw materials, components and subassemblies).
  7. Supply chain collaboration is leading to strategic flexibility in both products and the processes of maintaining supply chain flows.
  8. Wherever possible, all along the supply chain, the flow of product is buffered with capacity rather than inventory. (Supply chain partners may make strategic capital investments in other parts of the supply chain to build needed capacities as part of the collaboration.)
  9. Managers and executives involved in the supply chain have made it a priority to develop strategic alliances and partnerships all along the supply chain in order to recognize and strengthen the present "weakest link."
  10. All across the supply chain, metrics focus on increasing throughput (not cutting costs).
  11. Forecasts are still used for planning, but "pull" is used to drive all execution in the supply chain.
  12. The focus is now on synchronizing the flow of product across the supply chain, not on balancing supply chain capacities.

ONE ADDITIONAL NOTE:

On the contrary side, some "big dogs" (or "big dog" wannabes) in the supply chain think they are managing "the chain," but they treat it more like a "leash." They yank their smaller suppliers around until those suppliers are either driven out of business or simply won't do business with the "big dogs" at all anymore.

This kind of attitude is bad for business and bad for the economy in general. The best suppliers are profitable suppliers. If any organization is destroying the supply chain's profitability one link at a time, it is destroying its supply chain by weakening one link after another. These weak links will not have reserve capacities to respond to changes in demand or make up for supply chain losses when "Murphy" strikes.

P.S. - I was going to write on the other words (i.e., "supply" and "management"), but this is probably enough for now. Thanks.

15 March 2012

Increased supply chain confidence through simplicity

Traditional approaches to inventory management and replenishment divide inventory stocks into two portions:

  1. Working stock – the inventories designed to cover daily demand
  2. Safety stock – the inventory quantities designed to cover variation in supply or demand or both

[Figure: Traditional inventory management view (working stock plus safety stock)]

Years of statistical analytics and software development have been focused on improving the ways in which lead-time, demand and safety stock values are calculated. So much, in fact, that most of the people who use supply chain management, inventory management, or replenishment software frequently do not even understand what the software is doing, how it is doing it, or why it works or does not work.

Some years ago I was consulting for a firm and, in the course of the engagement, reviewing how they went about their inventory management and replenishment. They had software that did inventory management and included replenishment calculations.

So, we were sitting together and the fellow responsible for replenishment was describing what he was doing on his computer. He said, “Here’s the ordering screen. It shows historical demand here [pointing], and the recommended order quantity here [again, pointing]. And, I don’t know exactly what this number is for [pointing], but if I think the system is suggesting that I buy too much or too little, I can adjust this number until the suggested order quantity lines up with what I think it ought to be.”

Well, of course, what the system was doing was exponential smoothing of demand, and the value he was adjusting was alpha (the smoothing factor) in the formula.
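
For anyone who has not seen it, simple exponential smoothing is a one-line recurrence. Here is a quick Python sketch (with made-up demand numbers) showing how alpha shifts the result:

```python
def exp_smooth(demand, alpha, initial):
    """Simple exponential smoothing: each new forecast blends the
    latest actual demand with the previous forecast."""
    forecast = initial
    for actual in demand:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

history = [100, 120, 90, 140, 110]   # hypothetical monthly demand
for alpha in (0.1, 0.5, 0.9):
    print(alpha, round(exp_smooth(history, alpha, history[0]), 1))
# A higher alpha chases recent demand; a lower alpha hugs the average.
```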

What I refrained from asking him (only by biting my tongue) was, “If you are going to simply adjust the system’s findings to your intuition, why use the system at all?”

The moral is: systems that are not understood—and most complex systems are not understood—are also not trusted. Especially if they frequently, or even regularly, produce what are perceived to be unreliable results.

The artificial divide

The artificial subdividing of stock quantities into “working stock” versus “safety stock,” and the added complexity around the factors used to calculate the one value versus the other, provides no added value. In fact, the complexity actually leads to less reliability, because users frequently do not know how to set the input parameters effectively. Not to mention the fact that parameters that are effective today may not—in fact, likely will not—be effective tomorrow or next week.

The fact of the matter is, in most cases, the only awareness of the division between “working stock” and “safety stock” quantities is found in the software itself and in those who are intimately acquainted with the software and its configuration. The people on the warehouse floor typically do not know when they have made an incursion into “safety stock.” They don’t know that the first 41 units they picked for order number 8789089 were from “working stock,” and the last nine units were taken from “safety stock.” And, they should not care.

Even the managers frequently have no visual signal that an incursion has been made into “safety stock.”

Inherent simplicity

[Figure: ToC Dynamic Buffer Management view of inventory]

Employing Theory of Constraints (ToC) Dynamic Buffer Management (DBM) makes inventory management and replenishment far easier to understand for those responsible for them (read: supply chain managers). The buffer size (for any given item in any given stocking location) is a single number. (Let’s say, 1,000 units.)

The formula for setting the initial buffer size is simple and easily understood. Typically that formula is something like this:

Initial Buffer Qty = [Average Daily Demand] * [ToC Replenishment Days] * [2] * [Paranoia Factor]

The only factor that really needs any kind of explanation is the “Paranoia Factor.” This is merely a multiplier selected by intuition and based on senses of the criticality of an item. An item might be critical because it is used in the production of 800 other items; or because the majority of your customers all buy this item; or because one hugely important customer relies upon you for this item; or dozens of other reasons.

Once the initial buffer size has been calculated and set, the buffer is divided (mathematically) into three “zones.” The top third is called the green zone, the middle third is called the yellow zone, and the bottom third is called the red zone.
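
As a concrete sketch, here is the initial sizing and the zone math in Python; the demand, replenishment and paranoia inputs are hypothetical, while the formula and the equal thirds follow the description above (the zone_color helper anticipates the visual signals discussed further below):

```python
def initial_buffer(avg_daily_demand, replenishment_days, paranoia=1.0):
    """Initial Buffer Qty = [Average Daily Demand] *
    [ToC Replenishment Days] * [2] * [Paranoia Factor]."""
    return avg_daily_demand * replenishment_days * 2 * paranoia

def zone_color(on_hand, buffer_qty):
    """Map an on-hand quantity to its third of the buffer."""
    if on_hand <= buffer_qty / 3:
        return "red"
    return "yellow" if on_hand <= 2 * buffer_qty / 3 else "green"

# Hypothetical item: 25 units/day demand, 10-day replenishment
# cycle, paranoia factor of 2 because the item is critical.
buf = initial_buffer(25, 10, paranoia=2.0)   # 1,000 units
print(buf, zone_color(450, buf))             # 1000.0 yellow
```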

Going forward, the DBM system simply monitors for conditions at each replenishment cycle and adjusts the buffer size according to rules. The rules are typically:

  1. Too Much Green – The item has been found in the green zone on three consecutive replenishment cycles; therefore, reduce the buffer size by one-third.
  2. Too Much Red – The item has been found in the red zone on two consecutive replenishment cycles; therefore, increase the buffer size by one-third.

It’s that simple. No complex formulas for calculating and managing variability in demand or supply.
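
To illustrate, here is a minimal sketch of those two rules in Python, assuming the item’s zone is recorded once per replenishment cycle (newest observation last):

```python
def adjust_buffer(buffer_qty, recent_zones):
    """Apply the two DBM rules to the zones observed at the most
    recent replenishment cycles."""
    if recent_zones[-3:] == ["green"] * 3:   # Too Much Green
        return buffer_qty * 2 / 3            # reduce by one-third
    if recent_zones[-2:] == ["red"] * 2:     # Too Much Red
        return buffer_qty * 4 / 3            # increase by one-third
    return buffer_qty

print(adjust_buffer(1_000, ["green", "green", "green"]))  # ~666.7
print(adjust_buffer(1_000, ["yellow", "red", "red"]))     # ~1333.3
```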

On top of that, supply chain managers can have simple visual signals as to the status of their buffers. A simple view of the inventory data (by location) can readily provide red light, yellow light, and green light indicators for the buffer status in any stocking location for any item. No math and easy to equate to action:

  • Green light – no action required
  • Yellow light – take note, perhaps investigate critical factors like larger-than-normal orders or orders pending for critical customers
  • Red light – consider expediting measures, if necessary


NOTE: There are more options available with DBM, such as identifying and managing SDCs (sudden demand change items—like seasonality), managing Virtual Buffers (between stocking locations, such as warehouse-to-warehouse replenishment, or broader supply chain visibility and collaboration). It is not the intent of this article to exhaust the applicability of DBM.


RKL eSolutions, LLC is in the process of building a cloud-based solution to help you manage your inventory in just such a way—using Dynamic Buffer Management and the Theory of Constraints. Contact me or fill out the contact form here if you would like more information.

12 March 2012

The biggest supply chain management mistake over the last 30 years?

I would have to say that the biggest mistake made in SCM over the last 30 (or more) years is the industry’s reliance upon forecasting.

  1. Forecasts are virtually always wrong. They may be wrong by a little bit, or they may be wrong by a lot. But they are--for all practical purposes--always wrong. The forecast may be wrong and you have too much inventory--which your firm may call "good" ("Great job! We didn't have an out-of-stock.") or may call "bad" ("Hey! Wake up! We are holding too much inventory!"). The forecast may also be wrong and you have too little inventory, which (again) management may call either "good" ("Great job! We sold out of that!") or "bad" ("Hey! Wake up! We lost sales on that because we ran out of stock!").
  2. Forecasts only lead to one of two conditions: over-stocks and out-of-stocks.
  3. Forecasts offer no assurances of being responsive to the market.

Personally, I believe that if the industry had spent as much time, effort and money on increasing replenishment frequency (reducing lead-time), improving supply chain visibility (end-to-end), making inventory management more agile (providing rapid response to changes in end-user demand), and better understanding and managing sudden demand changes (seasonality and similar events), there'd be more sales, lower prices, reduced obsolescence and happier supply chain managers everywhere today.

Replenishment frequency

Both Lean and Theory of Constraints management have certainly taught us that replenishment cycles should be as short as possible. One-for-one replenishment is ideal. But short of that, daily is better than weekly; weekly is better than every two weeks; and so forth. When the costs of obsolescence, lost sales, lost customers (due to those lost sales), the marketing required to win back lost customers, and the many other costs associated with out-of-stocks (on the most popular items) and over-stocks (on the "dogs") are all tallied, I find it hard to believe that most organizations would not perform better with more agile suppliers and logistics, even if the so-called "cost of goods" might be marginally higher. Correct valuation of Throughput certainly should teach us that lesson in many, many cases.

End-to-end supply chain visibility

One of the things wrong with today's supply chain is that manufacturers actually believe that they have made a "sale" when they sell the product to the distributor. In turn, the distributors believe that they have made a sale when they unload some product on a wholesaler--and so forth on down the supply chain.

The truth is, until the end-user has made a purchase, all the other "sales" have simply put inventory into the supply chain. Inventory that will become obsolete or eat demand for newly-introduced products when liquidated at "discounted" prices. Either way, it's bad for profits in the supply chain.

Imagine how much better it would be if the manufacturer (in Malaysia, or wherever) knew within 24 hours precisely how many finished goods were being purchased by end-users every single day. They would know how to pace their production and manage their inventory buffers--as would everyone else in the supply chain!

Inventory management agility

Instead of setting inventory policy once a year, or even several times a year, systems should dynamically adjust for changes in demand (via supply chain visibility) constantly. And, instead of complexity and hard-to-understand formulas, inventory managers should be able to respond to simple visual signals indicating the condition of inventory in their direct control--as well as signals coming from across the supply chain.

Managing sudden demand changes

Supply chain systems should be able to rapidly analyze historical data and identify SDC (sudden demand change) items by simple rules. The systems should then help the supply chain managers understand how to manage build-ups and build-downs for SDC items based on the supply chain production capacities for each item or group of items.

Personally, I think time, energy and money spent in these areas--some of which is now happening--would do a "world" of good (pun intended).


What do you think?

26 January 2012

Sage ERP X3 becomes Oracle Database Ready

Business software vendor Sage Business Solutions’ ERP X3 solution has been granted Oracle Database Ready status through the Oracle Partner Network (OPN).

The announcement means that Sage has tested and supports ERP X3 on Oracle Database 11g Release 2 and, by extension, the Oracle Database Appliance. Results demonstrated smooth installation of ERP X3 application databases on the Database Appliance.

Oracle Database 11g Release 2 offers Sage industry-leading performance, reliability, and scalability to power business-critical applications.

Customer benefits of the solution include lower storage usage, reduced administration tasks, and the ability to consolidate onto secure private database cloud environments.

Sage ERP X3 V6.3 is available on Oracle Database 11g Release 2.


As reported at ARNet.

25 January 2012

Consider the possibilities (especially now, in these challenging times)

A recent survey of published results by manufacturing and service companies[1] that have applied constraint management methods effectively shows:

[1] Mabin, Victoria J. and Steven J. Balderstone, The World of the Theory of Constraints: A Review of the International Literature, St. Lucie Press, Boca Raton, FL, 2000

[Excerpt from Schragenheim, Eli and H. William Dettmer, Manufacturing at Warp Speed: Optimizing Supply Chain Financial Performance, St. Lucie Press, Boca Raton, FL, 2001]


If you would like help getting started with applying constraint management to your business for rapid ROI and ongoing improvement, please contact me. Find me on LinkedIn.