WHY BE CONCERNED ABOUT TRANSFORMER EFFICIENCY? A transformer is the most efficient a-c apparatus we have, far more efficient than any motor of similar power-handling capability (see Figure 1).
Still, efficiency does matter, for several reasons. Transformers are found throughout any power system. A typical residential neighborhood may contain hundreds of small, relatively inefficient motors, most of which operate only a few hours a month. In contrast, the same area may contain several 15 to 50 kVA transformers (like Figure 2) that are continuously energized. Energy losses occur not only within those units, but in the associated upstream circuits, even when the transformers are lightly loaded.
When higher motor efficiency was becoming an issue after 1973, policy makers often pointed out that two-thirds of all electrical energy generated in the U.S. was "consumed" by, or supplied to, motors. For transformers, however, the figure is almost 100%. Except for some isolated onsite generation, all electrical energy passes through at least one transformer, and often several.
Transformer and motor usages differ in other significant ways. During the 1970's and 1980's, prior to any legislation on the subject, manufacturers and the U.S. Department of Energy (DOE) frequently compared operating costs and performance goals for higher efficiency motors with either "industry average" or "typical" or "standard" designs.
Then, as now, such terms meant little. For one thing, a true industry average could only be derived by combining stated performance levels with the number of motors built at each such level. Nothing of the kind was ever done. In addition, no basis existed for assuming that published or quoted efficiencies from one manufacturer rested on the same supporting data as those from another manufacturer.
Such comparisons sometimes included a third category: so-called high efficiency or premium efficiency designs. Neither term was ever defined. Today, only "premium" has significance as descriptive of the "NEMA Premium" product line.
Meaningless terms for transformers
Similar comparisons are now being made for transformers. They are equally meaningless. For example, a 2004 IEEE publication compared liquid-filled distribution transformer efficiencies tabulated in NEMA Standard TP 1 with those of "typical standard" and "typical premium" products (Table I). The NEMA document, first issued in 1996, covered performance of units from 10 to 2,500 kVA, single- or three-phase, with primary voltage through 34.5 kV and secondary voltage of 600 volts or less.
Whereas the 1992 Energy Policy Act (EPACT) prescribes efficiency values to be met by integral horsepower polyphase motors of certain types and sizes, it includes no such specifics for transformers. Section 124 says only that:
"The secretary [of the DOE] shall, within 30 months after the date of enactment . . . prescribe testing requirements for those . . . distribution transformers for which the secretary makes a determination that energy conservation standards would be technologically feasible and economically justified, and would result in significant energy savings [and] within 18 months after the date on which testing requirements are prescribed . . . prescribe, by rule, energy conservation standards. . . ."
After yet another six months, the secretary was to "prescribe labeling requirements for such . . . transformers. . . ."
The same section of EPACT further mandates that:
"The secretary shall evaluate the practicability, cost effectiveness, and potential energy savings of replacing, or upgrading components of, existing utility distribution transformers during routine maintenance and, not later than 18 months after enactment [of this Act] report the findings . . . to the Congress with recommendations on how such energy savings, if any, could be achieved."
Rather than 30 months, a dozen years have passed since EPACT's adoption, yet we still have no nationwide efficiency standards for distribution transformers. That does not mean, however, that the DOE has let the matter drop. Far from it. Intensive investigation since 1995 has resulted in hundreds of pages of technical and economic conclusions and recommendations in the Federal Register and other public documents. (see the accompanying box for a list of major developments).
The first step toward creation of efficiency standards was to determine how much could be expected from changes in design and manufacturing. To begin that process, the DOE called on Oak Ridge National Laboratory. In the ORNL 1996 report (see box), each possibility was examined for practicality, and to see how it might affect public health, safety, factory employment, and manufacturing facilities, in light of the cost of labor and capital.
Except for large substation units equipped with accessories requiring their own energy (such as forced-cooling fans), losses in a transformer are simpler in nature than in a rotating machine. Instead of five loss components, including the difficult-to-quantify stray load loss, the transformer exhibits only two: copper, or I²R, loss in the windings, and no-load, or core, loss. The sum of the two is sometimes called the load loss (more properly, the total loss; "load loss" then refers to the winding loss alone).
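That two-component model is simple enough to put in a few lines of code. In this sketch (the function name and the sample 50 kVA loss figures are our own illustrative assumptions, not from any standard), core loss is constant whenever the unit is energized, while winding loss scales with the square of the per-unit load:

```python
def transformer_efficiency(kva_rating, p_core_w, p_winding_rated_w, load_pu, pf=1.0):
    """Efficiency of a two-winding transformer from its two loss components.

    p_core_w: no-load (core) loss in watts, constant whenever energized.
    p_winding_rated_w: I^2*R (winding) loss at rated current; varies as load^2.
    """
    p_out = kva_rating * 1000 * load_pu * pf              # watts delivered
    p_loss = p_core_w + p_winding_rated_w * load_pu ** 2  # total internal loss
    return p_out / (p_out + p_loss)

# Illustrative 50 kVA unit: 150 W core loss, 600 W rated winding loss.
for load in (0.25, 0.5, 1.0):
    print(f"{load:4.2f}  {transformer_efficiency(50, 150, 600, load):.4f}")
```

Note that with these sample figures the efficiency at 25% load equals that at full load, with the maximum in between; that behavior becomes relevant when loading assumptions are discussed later in the article.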
A small "dielectric loss" within the insulation does exist, but most technologists consider it too small for significance. Of six textbooks on transformer theory and design, only one even mentions that item, its magnitude, or methods of calculating or testing it. One text says that "a small dielectric loss [occurs] in the insulation [and is] usually negligible." In some units a "stray loss" can also occur because of magnetic field impingement on structural parts; this, too, is normally neglected.
How much loss variation inherently results from manufacturing variations? No extensive studies appear to have been made. The influences of bearing lubricant, air gap uniformity, and die-cast rotor cage integrity are non-existent for transformers. But if losses of only 1% or less are to be mandated by legislation, tight tolerances will have to be justified.
Design changes to reduce transformer losses, just as in a motor, always involve tradeoffs. For example, consider varying the cross-sectional area of the transformer core. An increase tends to lower no-load loss while raising the winding loss. An increase in volts per turn reduces winding loss while increasing the core loss. Variation in conductor area and in the electric and magnetic circuit path lengths will affect efficiency in various ways, always leading the designer to seek a cost-effective balance.
To raise transformer efficiency, core loss has probably drawn the most attention. Core construction permits two important energy-saving features not applicable to industrial motors. First, the inherent colinearity between lamination orientation and the magnetic field direction allows use of grain oriented steel for transformer laminations. That greatly reduces hysteresis loss in the core-the energy required to cyclically realign the "molecular magnets" within the steel, which are randomly positioned in a non-oriented material.
Second, because laminations are sheared or slit in strips rather than being punched with slots, much thinner material can be used in a transformer core than in a rotating machine. Whereas motor laminations are usually 0.014 to 0.025 inch thick, transformer lamination thickness may be as low as 0.006, with 0.009 to 0.012 being common. That lowers eddy current loss.
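The classical thin-lamination formula has eddy-current loss varying as the square of thickness (and of frequency and flux density, divided by resistivity), so at fixed frequency, flux density, and steel grade the comparison reduces to a thickness ratio. A back-of-envelope sketch, not a design calculation; the function name is ours:

```python
def eddy_loss_ratio(t1_in, t2_in):
    """Classical thin-lamination eddy-current loss is proportional to
    t^2 * f^2 * B^2 / resistivity. Holding frequency, flux density, and
    material constant, the loss ratio is simply (t1/t2)^2."""
    return (t1_in / t2_in) ** 2

# A 0.018-inch motor lamination vs. common 0.009-inch transformer stock:
print(eddy_loss_ratio(0.018, 0.009))   # thicker stock -> 4x the eddy loss
# 0.009-inch silicon steel vs. 0.001-inch amorphous ribbon:
print(eddy_loss_ratio(0.009, 0.001))
```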
A further improvement appearing during the 1980's is amorphous core material. Resembling glass more than steel, this lamination material contains no granular structure at all. Laminations only 0.001 inch thick were used in the first mass-produced distribution transformers (25 kVA) manufactured by Westinghouse in 1986. Many similar units have been put in service since then, along with some large power transformers. Typical core loss in such a transformer is only one-third of that in a conventional unit.
Manufacturing the steel, producing the laminations, and assembling the core all require special production techniques that took much development time and effort. The units are more expensive. Payback on the added investment may take three to five years, whereas industry today prefers one to two years. Utilities are also hindered from capital investment by the ramifications (often unforeseen) of changes in the regulatory environment, as has been made clear by such recent occurrences as the Blackout of 2003.
Nevertheless, transformer efficiency remains important, and for the larger units was being closely watched long before any measures were taken to greatly improve motor efficiency. Evaluating transformer losses must take account of several operating conditions, of course. The load on the unit will determine the I²R loss in both primary and secondary windings. At no load, secondary winding current will be zero, but primary loss and upstream circuit loss will still be present. Hence, when purchasing a substation transformer, an electric utility will typically evaluate the cost of losses depending upon unit location. At or near a generating station, "line loss" assumes little importance; load loss predominates. For a transformer miles away on the system, primary line loss becomes more important. The resulting economic balance between internal and external loss, and between no-load and load conditions, results in a different financial evaluation of the same transformer performance.
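That economic balance is commonly expressed through loss-capitalization "A and B factors": dollars per watt of no-load loss and load loss, respectively, added to the bid price to form a total owning cost. A minimal sketch, with the function name and all dollar figures our own illustrative assumptions:

```python
def evaluated_cost(bid_price, p_noload_w, p_load_w, a_per_w, b_per_w):
    """Total owning cost = bid price + A*(no-load loss) + B*(load loss).

    A ($/W) capitalizes no-load loss, present whenever energized;
    B ($/W) capitalizes load loss and embeds the expected loading.
    Near a generating station B is small; far out on the system,
    upstream line loss pushes the factors higher.
    """
    return bid_price + a_per_w * p_noload_w + b_per_w * p_load_w

# Two hypothetical 500 kVA bids evaluated at A = $5/W, B = $1/W:
cheap = evaluated_cost(9000, 900, 5000, 5.0, 1.0)       # lossy but low price
efficient = evaluated_cost(12000, 500, 3500, 5.0, 1.0)  # pricier, lower losses
print(cheap, efficient)
```

With these (made-up) factors, the more expensive, lower-loss unit has the lower owning cost; different A and B values, reflecting a different location on the system, could reverse the ranking.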
Many reports have been issued by the DOE, consultants, utility groups, and others, containing estimates of the huge amounts of energy that could be saved if all motors were replaced by more efficient units. Usually overlooked is the time such replacement would require, considering the often quoted "typical" motor life span of 15-20 years. Distribution transformers enjoy much longer lives (25 to 30 years, says the DOE in one place, 32 years in another). Wholesale replacement is out of the question for these reliable units. Subject to occasional overload, but with no moving parts, no lubrication problem, and little risk of internal contamination, such transformers can far outlast motors. That, plus their already high efficiency, tends to make investment in transformer performance improvement more difficult to recover. Consequently, the DOE has carefully examined the long-term financial influence of transformer efficiency requirements, projecting future unit shipments over the years 2007-2035.
The 1997 supplement to the ORNL report reviewed the NEMA TP 1 transformer efficiency standards that had been issued the previous year. Further studies, a workshop, meetings with manufacturers, public comment review, and information gathering occurred throughout the next several years.
Efficiency goals then had to be assessed in light of expected electricity costs and other market influences. All that comes under the EPACT requirement that energy conservation standards be "economically justified."
Where the DOE's headed
Resulting from all that investigation was the DOE's issuance in July 2004 of its "Proposed Rules" for assigning specific transformer efficiency values, and for the test procedures to be followed in verifying performance. As with motors, both "compliance" and "enforcement" procedures are contemplated for transformer production. The statistical basis is similar to what was developed for motors (see "Finally-The DOE's 'final rule' on motor efficiency," in EA February 2000). Here, we won't attempt to go into the complexities of standard deviations, sample sizes, rejection rates, or "expanded uncertainty."
To understand where the DOE is heading, we first need to define just which transformers (and what performance conditions) are involved. A comparison with motor efficiency standards is useful here. For motors, universal industry practice is to publish efficiencies at full-load or nameplate horsepower. Test procedures, some catalog data, and common practice also include efficiencies at one-half and three-quarters load.
As attention focused on energy efficiency after the 1970's, it became recognized that most motors operate much of the time at around two-thirds of rated output. However, like so much else about motor operation, "that depends." Many authorities continue to recommend "right-sizing" motors to more nearly match their ratings with what the driven machine requires.
Whatever the merits of that argument, such a match reinforces the desirability of knowing motor efficiency at rated load. Besides, the variation of motor losses with output is such that efficiency at 75% load normally equals or slightly exceeds the value at full load.
Such a "flat" efficiency curve is characteristic of transformers as well (see Figure 3). Although we may think of a typical distribution unit as always fully loaded, because it's never off-line, the studies leading up to the DOE Proposed Rules have shown that low-voltage transformers are often subject to average loading of only 16% of rating, whereas most utility system units (medium voltage) are loaded to 20%-30% of rating.
In setting its own efficiency testing standards for distribution transformers, NEMA (in Standard TP 2-1998) specified load loss determination at 50% of rating for liquid-filled units, and either 35 or 50% (the range within which unit efficiency is a maximum) "as appropriate" for dry-type units (the lower figure suits "low voltage" transformers; the higher figure applies to "medium voltage"). The DOE has adopted the 50% figure for all transformers.
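The reason maximum efficiency falls in that 35-50% band follows directly from the two-component loss model: efficiency peaks at the per-unit load where the load-squared winding loss just equals the constant core loss. A quick sketch (the function name and sample loss figures are illustrative assumptions):

```python
import math

def peak_efficiency_load(p_core_w, p_winding_rated_w):
    """Per-unit load at which transformer efficiency is maximum: the point
    where the load^2-scaled winding loss equals the constant core loss,
    i.e., load = sqrt(core loss / rated winding loss)."""
    return math.sqrt(p_core_w / p_winding_rated_w)

# A design whose core loss is 1/4 of rated winding loss peaks at 50% load;
# one whose core loss is 1/8 of rated winding loss peaks near 35%.
print(peak_efficiency_load(150, 600))   # 0.5
print(round(peak_efficiency_load(75, 600), 2))
```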
Efficiency levels, and applicable standards, deal separately with two basic transformer types: "dry" or air-cooled designs, and the liquid-filled type. Although the application lines can be quite blurred, the dry types tend to be used within either commercial or industrial buildings, because they eliminate all concerns of liquid flammability, toxicity, or spills. Utilities, on the other hand, prefer liquid-filled units, mostly used outdoors where those concerns are minimal. Such transformers tend to be better adaptable to peak overloads, are smaller in size, suit the higher voltages, and are somewhat more efficient.
In the creation of EPACT motor efficiency requirements, voltage ratings and even the horsepower range presented little difficulty. Everyone recognized that only low-voltage machines would be involved. The existing NEMA classifications of general-purpose, definite-purpose, and special-purpose also lent themselves readily to legislative categories.
Questions did arise, later, about such derivatives as C-face motors or P-base verticals with thrust bearings. Exactly where to draw the line bounding "general purpose" took some time, eventually resulting in the DOE Final Rule "Tables of Many Common Features or Motor Modifications" (in 10 CFR Part 431, Subpart A, Appendix A), illustrating how variations from standard "general purpose" construction were to be interpreted.
Motor enclosure variations were more easily handled. In the "medium" or "integral horsepower" sizes covered by the legislation, only "open" (dripproof) and "totally enclosed" (TEFC or explosion-proof in general) motors were considered. Such offshoots as weather-protected or water-cooled weren't covered by EPACT.
When the time came to adapt the legislation to transformers, the same steps were necessary, but far more complicated. What exactly was a "distribution transformer"? (That's the only type mentioned in NEMA TP 1-1996.) Does it differ from a "power transformer" (Figure 4)?
Any transformer below 500 kVA has often been considered to be of the "distribution" type. However, no standard offers a definition by either kVA rating or voltage. The IEEE-100 Standard Dictionary did not use the term. One of the most recent texts on transformer construction and usage defines a distribution transformer as one that "takes voltage from a primary distribution circuit and 'steps down' or reduces it to a secondary distribution circuit or a consumer's service circuit . . . distribution transformers can have lower ratings [than 5 kVA] and . . . ratings of 5,000 kVA or even higher; so the use of kVA ratings to define transformer types is being discouraged. . . ." Even that isn't entirely clear, because "primary circuit" has no universal meaning.
Similarly, the DOE first defined a distribution transformer as one "designed to continuously transfer electrical energy either single phase or three phase from a primary distribution circuit to a secondary distribution circuit, within a secondary distribution circuit, or to a consumer's service circuit," within certain voltage and kVA limits. That proved unsatisfactory. The terms "primary" and "secondary" as applied to circuits can have various meanings. "Continuously" isn't necessarily applicable.
Hence, the latest "Proposed Rule" reads this way: "Distribution transformer means a transformer with a primary voltage of equal to, or less than, 35 kV; a secondary voltage equal to, or less than, 600 V; a frequency of 55-65 Hz; and a capacity of 10 kVA to 2,500 kVA for liquid-immersed units and 15 kVA to 2,500 kVA for dry-type units. . . ." Note the absence of any mention of either transformer function or circuit relationship.
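Because the definition is purely a ratings screen, it reduces to a handful of comparisons. A sketch (the function and its argument names are our own, purely illustrative):

```python
def is_distribution_transformer(primary_kv, secondary_v, freq_hz, kva, liquid_immersed):
    """Screen a unit against the July 2004 Proposed Rule definition:
    primary <= 35 kV, secondary <= 600 V, 55-65 Hz, and 10-2,500 kVA
    (liquid-immersed) or 15-2,500 kVA (dry-type). The definition says
    nothing about function or circuit relationship."""
    kva_min = 10 if liquid_immersed else 15
    return (primary_kv <= 35 and secondary_v <= 600
            and 55 <= freq_hz <= 65 and kva_min <= kva <= 2500)

# A 12.47 kV / 240 V, 60 Hz, 50 kVA liquid-filled pole unit qualifies:
print(is_distribution_transformer(12.47, 240, 60, 50, True))    # True
# A 10 kVA dry-type unit does not (below the 15 kVA dry-type floor):
print(is_distribution_transformer(12.47, 240, 60, 10, False))
```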
Other types, such as welding transformers, arc furnace transformers, grounding transformers, or autotransformers had to be excluded, just as "special-purpose motors" were excluded. (See Figure 5.)
Although unimportant at power frequency, "stray loss" can be significant in transformers supplying non-linear loads, but the losses and efficiency in any such unit are no more readily evaluated than motor efficiency in ASD service. The DOE Proposed Rule specifically exempts K-factor, "harmonic mitigating," or drive isolation transformers.
In addition to the recognized construction differences contributing to different loss expectations for liquid and dry types, variation in basic insulation impulse level, or BIL, capability for dry-type units has led to still further differences that the Proposed Rule takes into account. Table II-3 in the July 2004 document defines 13 separate "Engineering Design Lines" involving ten distinct "product classes," for each of which a separate set of minimum efficiency requirements may be created. (see Figure 6.)
Besides the type and kVA ranges cited above, the differences involve BIL, primary voltages (24,940, 14,400, 7,200, 4,160, etc.), and secondary voltage (120, 208, 240, 277, 480, etc.). Thus, despite the greater simplicity in basic construction and in internal loss distribution, these transformers are likely to be subject to much more complex efficiency standards than those for EPACT motors.
To give a rough idea of the number and types of these transformers involved in the DOE's rule making: during the year 2001, 1,374,366 units were shipped, of which 77%-more than a million units-were liquid-filled. Of the dry-type remainder, nearly 300,000 were low-voltage single-phase designs.
Rather than including a table of specific efficiencies for each transformer category, the Proposed Rule sets forth a range of efficiency values for a representative rating in each of the 13 design lines. Five values of efficiency were examined for the sample rating in each line. For example, Design Line 3 includes liquid-filled, medium-voltage, single-phase ratings from 167 to 833 kVA. The unit studied was rated 500 kVA, 65°C rise.
The lowest efficiency, described as CSL 1 (Candidate Standard Level 1), was the value in NEMA TP 1: for this example, 99.3%. The highest efficiency (99.75%, involving losses 10% greater than "the most efficient design identified in the engineering analysis") was "among the most efficient transformers" available in that design line. The three intermediate levels (99.4%, 99.6%, and 99.7%) were spaced roughly equally between those extremes. Complex cost/benefit calculations were then applied to each level. "Stakeholder feedback" will guide the DOE in setting final limits.
Whatever limits are eventually selected, test precision will be at least as important for transformers as for EPACT-regulated motors. Fortunately, transformer efficiency testing-like internal losses themselves-is much simpler than the IEEE 112 procedure for motors. Existing test methods are found in IEEE Standards C57.12.90, for liquid-filled units, and IEEE C57.12.91 for dry-type transformers (the latter are also dealt with in NEMA Standard ST 20).
For example, at no load, in the absence of friction and windage associated with rotation, only a single set of measurements is needed, at rated voltage only. A typical circuit is shown in Figure 7(a). Voltage applied to the low-voltage winding, with the high-voltage winding open-circuited, is adjusted until the meter V1 (reading average voltage) indicates the rated value for the energized winding. The second voltmeter, V2, gives a true RMS reading, which any supply voltage harmonics will cause to differ from the average value. The wattmeter reading of no-load loss in the transformer is then adjusted by a formula in the test code to compensate for the difference.
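The correction commonly cited from IEEE C57.12.90 divides the measured loss by a factor built from the assumed hysteresis and eddy-current fractions of the core loss and the squared ratio of the two voltmeter readings. A hedged sketch (the 50/50 default split and the function name are assumptions here; consult the standard for the authoritative form):

```python
def corrected_no_load_loss(p_measured_w, v_rms, v_avg, p1=0.5, p2=0.5):
    """Waveform correction for the no-load loss test, in the form commonly
    cited from IEEE C57.12.90. p1 and p2 are the assumed per-unit hysteresis
    and eddy-current fractions of the core loss (0.5 each by default).

    With a distorted supply, the average-responding voltmeter (set to rated
    value) and the true-RMS voltmeter disagree; their ratio enters the
    eddy-loss term as k = (v_rms / v_avg)**2.
    """
    k = (v_rms / v_avg) ** 2
    return p_measured_w / (p1 + k * p2)

# A perfectly sinusoidal supply (both meters agree) needs no correction:
print(corrected_no_load_loss(150.0, 240.0, 240.0))   # 150.0
# A distorted supply with RMS above average reduces the corrected figure:
print(round(corrected_no_load_loss(150.0, 242.0, 240.0), 1))
```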
To arrive at the winding loss associated with load current, the test circuit of Figure 7(b) is used. Here, the high-voltage winding is energized with the low-voltage winding shorted. Supply voltage is adjusted to produce rated current in the energized winding. Rated amperes will then appear in the short-circuited winding as well, so that the wattmeter reading represents the so-called load loss. Again, only a single measurement is needed, in contrast to the series of six or more readings at different loads that are taken during a load test on a motor.
Unlike the a-c motor, the transformer needs no external machine to act as a loading device. Hence, transformer test procedures do not include the variety found in IEEE 112, such as the back-to-back (duplicate machines) test or the dynamometer test.
No standards covering test precision
A final major difference between motor and transformer testing is that no attempt has been made to fully standardize the relative precision of the transformer test procedure. No tables of "nominal" and corresponding "minimum" or guaranteed efficiencies have been arrived at. No EPACT standards exist.
The motor efficiencies that became EPACT requirements had been worked out beforehand by NEMA. Similarly, the starting point for the DOE's distribution transformer efficiency standards was developed in part by NEMA earlier, in TP 1-1996 (see the accompanying box).
During the several years prior to passage of EPACT, several states passed their own laws regulating motor efficiency. The same has been true for transformers, with California, Massachusetts, Minnesota, and New York all requiring compliance with NEMA TP 1 between 1999 and 2003. Even municipal ordinances have been enacted. Once the DOE rules are in force, all such local laws will be superseded.
The language of TP 1 was somewhat ambiguous. Since it was titled a "guide," rather than a "standard," one would expect its content to be suggestive rather than prescriptive. Much of the text does read that way, with such phrasing as "may" or "could be," while other sections use the mandatory "shall."
In TP 1, "minimum" efficiency values are tabulated for transformer ratings defined as "Class 1" units. They range from just under 97% to 99.4%. A tolerance is allowed, in these words: "A transformer is considered to be within an acceptable tolerance of the nameplate efficiency if the no load and load losses meet the single unit tolerances defined in ANSI standards C57.12.00 or C57.12.01." Further, the guide defines "nameplate efficiency" as "Rounded efficiency to 1/10 of 1%."
In its consideration of loss evaluation on which motor efficiency standards are based, NEMA adopted either 10% or 20% variations in loss measurement as the smallest reasonable steps between one stated (published or nameplate) efficiency value and the next one above or below it. The simpler methods of evaluating transformer losses, and the higher efficiencies involved, have led to much tighter tolerances.
Those are "plus tolerances": the amount by which actual losses can exceed those associated with nameplate efficiency. The IEEE/ANSI standards just cited prescribe a no-load loss tolerance of 10%, and a total loss tolerance of 6%. Those same tolerances were adopted in NEMA TP 2-1998 governing transformer testing, and they appear also in NEMA ST 20-1992, "Dry Type Transformers for General Applications."
Reconciling all these small variations is challenging. For example, consider a unit said to exhibit 99% efficiency. Actual total loss is 0.0101 per unit. If the measured loss exceeds that by 6%, the resulting efficiency is 0.9894; rounded to 1/10 of one percent, that becomes 98.9%, a single rounding step below the nameplate 99.0%. The permissible loss variation thus shifts the stated efficiency by no more than one rounding increment. Setting up the kind of "nominal" and "minimum" efficiency ranges established for motors would be futile.
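The per-unit arithmetic behind that example can be checked in a few lines (the function name is our own):

```python
def efficiency_after_loss_tolerance(nameplate_eff, loss_increase_pct):
    """Per-unit total loss implied by a nameplate efficiency (on an output
    base: loss = (1 - eff) / eff), inflated by the permitted tolerance,
    then converted back to an efficiency."""
    loss_pu = (1.0 - nameplate_eff) / nameplate_eff
    loss_pu *= 1.0 + loss_increase_pct / 100.0
    return 1.0 / (1.0 + loss_pu)

# A 99% nameplate unit with the 6% total-loss tolerance applied:
eff = efficiency_after_loss_tolerance(0.99, 6.0)
print(round(eff, 4))   # 0.9894
print(round(eff, 3))   # to the nearest 1/10 of 1%
```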
The NEMA TP 2 test methods allow dry-type transformers to be considered as meeting the "specified" efficiency as long as the tested losses of any individual unit don't exceed the "allowed" losses by "more than 8%." That figure was derived from the ANSI/IEEE tolerances on no-load and total losses, adjusted for transformer operation at 35% or 50% of rated load rather than 100%.
After reviewing all that, the DOE issued its second July 2004 Proposed Rule, within a new Part 432 of the 10 CFR regulations, dealing with transformer test procedures. The basic approach of TP 2 was adopted, including measurement accuracy limits of ±3% for losses, ±1°C in temperature, and ±½% each for measured voltage, frequency, current, and resistance.
Although the DOE accepted the basics of NEMA TP 2 (tolerances on losses themselves, not on the efficiency value as such) and related IEEE standards, the Department saw several shortcomings-such as insufficient detail on resistance and temperature measurements, improper phase angle correction, and inadequacies in load loss calculation.
The DOE was also not satisfied with the sampling plan set forth in NEMA TP 2 for demonstrating compliance with efficiency standards. It appeared to require either 100% product testing, or statistically valid sampling of at least five units every month. "Reluctant" to impose that burden on manufacturers, the DOE has instead suggested a sampling plan more like that adopted for motors. An option (also like that available for motors) offers "alternative methods" of verifying performance other than actual testing.
What about the effect of repairs on transformer efficiency? This was a concern for motors and ultimately resulted in creation of the ANSI/EASA AR 100 repair standard. However, the DOE never attempted to impose Federal standards on the repair process. That is even less likely for transformers, most of which do not require oven stripping for rewind. Furthermore, although rewinding can increase efficiency of some older motors, that benefit is less common for transformers (Figure 8).
Thus, although the nature and evaluation of losses is much simpler for transformers than for motors, arriving at efficiency standards is a far more complex task. Whereas the DOE accepted an existing set of NEMA standard values for a-c motor efficiency, and an existing IEEE test standard as well, that has not been true for transformers. For several reasons, NEMA's published transformer efficiency standards (in TP 1) have served only as a basis for further intensive investigation by the DOE. The 43-page July 2004 "Proposed Rules" document for setting federal standards is extremely wide-ranging. It does not include a simple table of rating-by-rating minimum efficiencies, but rather sets up a complex basis for eventually arriving at such values.
Public comment on both efficiency and test standards was solicited by the DOE until November 2004. However, as of January 2005, the pertinent web site had not been updated since October. The process may be far from complete.
© 2005 Barks Publications. Provided by ProQuest LLC. All Rights Reserved.
Source: Electrical Apparatus