
Nonprofit Organization Financial Performance Measurement

An Evaluation of New and Existing Financial Performance Measures

William J. Ritchie and Robert W. Kolodinsky
Consensus about financial performance measurement remains
elusive for nonprofit organization (NPO) researchers and practitioners alike, due in part to an overall lack of empirical tests
of existing and new measures. The purpose of the current study
was to explore potential similarities of financial performance
measures derived from two sources: current NPO research and
key informant interviews with NPO foundation constituencies.
The authors examined financial performance measurement
ratios with data from fifteen Internal Revenue Service (IRS)
Form 990 line items. Using factor analytic techniques, they
found three performance factors, each with two associated financial measurement ratios, to be present. They categorized the
performance factors as fundraising efficiency, public support,
and fiscal performance. This article discusses implications of
the findings and future research.
Note: We thank the National Center for Charitable Statistics for the financial data. We also are grateful to J. Jeffrey Robison, president, Florida State University Foundation, for his helpful assistance.

Nonprofit organizations (NPOs) provide important services throughout the United States and beyond, but the degree to which such organizations are effective remains a much-debated topic (Herman and Renz, 1999; Jackson and Holland, 1998; Murray and Tassie, 1994; Kanter and Summers, 1987). Although NPO stakeholders are vitally interested in seeing their organizations perform optimally, agreement about NPO financial performance measurement and overall performance evaluation has remained elusive to both researchers and practitioners. In particular, a general lack of convergence of financial performance criteria has contributed to
NONPROFIT MANAGEMENT & LEADERSHIP, vol. 13, no. 4, Summer 2003
© Wiley Periodicals, Inc.
367
368
RITCHIE, KOLODINSKY
NPOs’ using a confusing array of financial measures (Herman and Renz, 1998), with little attention to testing whether those measures are distinct from one another. We believe that this general lack of empirical testing has undermined researchers’ confidence in any single set of financial measures, given the myriad of measures in use today.
This lack of scrutiny of NPO financial performance measures poses problems for both researchers and practitioners. The researcher finds it difficult to develop normative
conclusions regarding the activities and attributes of NPOs that lead
to higher (or lower) performance. The practitioner’s difficulties arise
from an inability to effectively assess performance, particularly when
attempting to identify tested measures that enable the comparison
of one’s own organization with that of similar organizations. For
example, daily managerial activities such as determining proper allocation of scarce resources and effectively communicating organizational legitimacy to key organizational stakeholders are likely to be
arduous when those involved cannot agree on performance criteria.
This study’s purpose is to present a process for evaluating
financial performance measures as well as identifying distinct
performance-related categories by examining sixteen financial performance measures. We derived the formulas for the financial measures from two sources: nonprofit literature and key informant
interviews with NPO constituencies. IRS Forms 990 were the source
for financial data. The authors subjected the measurement ratios to
factor analytic techniques using both cross-sectional and longitudinal
data to identify patterns among the measurement ratios. The current
study identified three distinct factors (that is, performance-related
categories) consisting of six of the original sixteen financial measures.
The methodology used in this study serves as a model to evaluate
financial measures for both NPO researchers and practitioners.
Nonprofit Effectiveness and Financial
Performance Measurement
Since the 1980s, both practitioners and researchers have increasingly
turned their attention to the topics of organizational effectiveness
and performance. Despite the apparent importance that organizational stakeholders place on effectiveness, scholars have made little progress toward creating effective tests of, and finding consensus on, financial performance measurement, an important component
of assessing overall effectiveness (Cameron, 1986; Kirchoff, 1977).
As an example of the lack of consensus, Cameron (1978, 1986)
found that, even for effectiveness research conducted on for-profit
firms, the majority of measures scholars used did not overlap with
other similar studies.
Lack of measure evaluation and consensus is even more problematic when considering NPOs. A review of the NPO and strategic
management literatures fails to reveal a collection of common, distinct
financial ratio categories that are useful for determining firm-level outcomes relative to other similar organizations. For instance, in their study
of NPOs serving the disabled, Green and Griesinger (1996) chose vastly
different measures (for example, goal attainment) than did Siciliano
(1997; for example, system resources) in the analysis of YMCAs.
Although the creation of a wide variety of measures is often justified
(that is, they are an attempt to cope with issues such as variations in
structures, multiple missions, and a host of other unique organizational
characteristics), nevertheless, the net result for researchers is another
measure that applies to only one context. This is problematic because, as
Herman and Renz (1999, p. 122) suggest, the current focus on contextually specific measures runs the risk of “even greater fractionating of
knowledge and incommensurability of theories and findings.” Elsewhere, Herman and Renz suggest that the literature provides minimal
explanation of performance constructs and that “the question of how
to understand and assess the effectiveness of . . . [the NPO] . . . continues to challenge practitioners and scholars alike” (1998, p. 23). These
authors are not alone in their assessment. In a review of the strategic
management and nonprofit literature for the period 1977 to 1992, Stone
and Crittenden (1993, p. 203) found that “Works on strategic management in [NPOs] have just begun to address the thorny topic of performance, a delay due largely to the well-articulated difficulty in designing
appropriate measures . . . [even though] research on performance
within the non-profit context could benefit . . . organizations.” More
recently, Stone, Bigelow, and Crittenden (1999, p. 408) conducted
another synthesis of the strategic management and nonprofit literature
(for 1977 to 1997), concluding that “despite the wealth of research,
performance has received scant attention.”
Two general themes emerge from a review of the recent research
on NPO effectiveness and performance measurement. First, a call
resounds for more research about NPO effectiveness and performance
(for example, Forbes, 1998; Tuckman and Chang, 1998; Herman and
Renz, 1999; Stone, Bigelow, and Crittenden, 1999; Rojas, 2000;
Hoefer, 2000). Second, too few empirical studies suggest methods for
testing new and existing measures to evaluate their uniqueness. Using
financial ratios as a basis, the current study will present a process for
identifying and testing a group of financial measures to ascertain their
relevance and distinctiveness for a homogenous sample of NPOs. Both
the methods and the results of this study serve as a model for
researchers and practitioners to follow when evaluating new and
existing financial performance measures in other NPO industries.
Methods
The analysis of the performance measures was divided into two
phases, an exploratory phase (Phase 1) and an application phase
(Phase 2). Phase 1 involved factor analyses of sixteen financial
performance ratios using both cross-sectional and longitudinal university foundation data (IRS Form 990 data for fiscal years 1990 to
1995) gathered from the National Center for Charitable Statistics
(NCCS). During this phase, we also used data from a second sample
of foundations, outside the university domain, to control for
potentially significant outside variables that might cloud results. In
Phase 2 we analyzed the measures resulting from Phase 1 using
financial data collected from IRS Forms 990 (for fiscal year 1998 to
1999) for a sample of university foundations.
Phase 1
Sample. We selected the university foundations using the
National Taxonomy of Exempt Entities (NTEE) (specifically NTEE
code B43I for university foundations) developed by NCCS (Urban
Institute, 1998). Initially, we constructed the financial measurement ratios using 1990 to 1995 data from fifteen IRS Form 990 line items:
1A, 1D, 5, 8C, 12, 14, 15, 16, 17, 21, 44D, 45, 46, 54, and 59. Only
those foundations with information necessary to complete each of
the sixteen measures were included in the sample. The final sample
consisted of 122 university foundations, out of a total 297 in the
NCCS database.
Measures. The financial performance measures used in the study
were based on information from two sources. The first source was
existing literature. We included in this study two performance measures used by Siciliano (1996, 1997) (see performance measures 3
and 11 in Table 1) as well as two performance measures suggested
by Greenlee and Bukovinsky (1998) (see performance measures 7
and 11 in Table 1). The second source of measures was key informant interviews with university foundation constituencies including
a dean, a foundation president, board members, and the foundation’s
financial management staff. A summary of all sixteen financial performance measures and their corresponding IRS Form 990 line item labels appears in Table 1.
The performance category labels used in this study were based
upon the names that Siciliano (1996, 1997) or Greenlee and
Bukovinsky (1998) assigned to their measures. Specifically, the fiscal
performance category is based on Siciliano’s measurement ratio of
total revenue to total expenses. The fundraising efficiency category
includes one of Greenlee and Bukovinsky’s efficiency measures. The
public support category was named for the inclusion of a ratio identical to Siciliano’s index of public support (total contributions divided
by total revenue). Greenlee and Bukovinsky have identified a similar measure for this category. The investment performance category
refers to the inclusion of various combinations of ratios involving
marketable securities. Interviews with key informants served as the basis for these ratios.
Statistical Analysis. Researchers use factor analysis extensively as a means to identify patterns in data and as a technique for reducing data (Hair, Anderson, Tatham, and Black, 1998).
Table 1. Initial Financial Performance Measurement Ratios and Preliminary Categories

Fiscal Performance
1. Total revenue available for programs divided by total revenue
   (line 12 - [line 14 + line 15 + line 16]) ÷ line 12
2. Total revenue divided by total assets
   (line 12 ÷ line 59)
3. Total revenue divided by total expenses (Siciliano, 1996, 1997)
   (line 12 ÷ line 17)
4. (Total revenue minus total expenses) divided by total revenue
   (line 12 - line 17) ÷ line 12
5. (Total revenue minus total expenses) divided by total assets (ROA)
   (line 12 - line 17) ÷ line 59
6. Net assets (fund balances) divided by total assets
   (line 21 ÷ line 59)

Fundraising Efficiency
7. Direct public support divided by fundraising expenses (Greenlee and Bukovinsky, 1998)
   (line 1A ÷ line 44D)
8. Total revenue divided by fundraising expenses
   (line 12 ÷ line 44D)

Public Support
9. Total contributions (gifts, grants, and other contributions) divided by total expenses
   (line 1D ÷ line 17)
10. Total contributions (gifts, grants, and other contributions) divided by total assets
    (line 1D ÷ line 59)
11. Total contributions (gifts, grants, and other contributions) divided by total revenue
    ("Index of public support"; Siciliano, 1996; Greenlee and Bukovinsky, 1998)
    (line 1D ÷ line 12)
12. Direct public support divided by total assets
    (line 1A ÷ line 59)

Investment Performance and Concentration
13. Return on securities divided by total securities
    (line 5 ÷ line 54)
14. Net gain or loss on sale of securities divided by total securities
    (line 8c(A) ÷ line 54)
15. Cash and savings divided by total assets
    (line 45 + line 46)(B) ÷ line 59(B)
16. Total securities divided by total assets
    (line 54(B) ÷ line 59(B))

Note: Calculations use IRS Form 990 line items.
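To make the Table 1 definitions concrete, the following Python sketch computes a few of the ratios from IRS Form 990 line-item values. The dictionary keys and dollar figures are hypothetical placeholders rather than actual foundation data; the remaining measures follow the same pattern of dividing one line item (or a simple combination of line items) by another.

```python
# Sketch: computing a few of the Table 1 ratios from IRS Form 990 line items.
# The line-item values below are hypothetical placeholders for illustration only.

form_990 = {
    "1A": 1_200_000,   # direct public support
    "1D": 1_500_000,   # total contributions (gifts, grants, and other contributions)
    "12": 2_000_000,   # total revenue
    "17": 1_700_000,   # total expenses
    "44D": 150_000,    # fundraising expenses
    "59": 9_000_000,   # total assets
}

ratios = {
    # Fiscal performance (measure 3): total revenue / total expenses
    "revenue_to_expenses": form_990["12"] / form_990["17"],
    # Fundraising efficiency (measure 7): direct public support / fundraising expenses
    "support_to_fundraising": form_990["1A"] / form_990["44D"],
    # Public support (measure 11): total contributions / total revenue
    "index_of_public_support": form_990["1D"] / form_990["12"],
    # Public support (measure 12): direct public support / total assets
    "support_to_assets": form_990["1A"] / form_990["59"],
}

for name, value in ratios.items():
    print(f"{name}: {value:.2f}")
```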
Factor analysis
allows the researcher to analyze a set of variables (for example, ratios
of financial performance measurement) to identify data patterns and
determine the degree to which such variables group together on specific related factors or dimensions (for example, performance-related
categories). The groupings on related dimensions are based in part
upon “loadings,” which represent the level of correlation (usually .40
or greater) (Hatcher and Stepanski, 1994) of a given variable with
that factor and distinctiveness of factors (as identified by proportion
of explained variance and eigenvalues). Although used extensively
in the literature to identify unique constructs in survey design, precedent for using factor analysis with financial performance measures
also exists in the for-profit literature. For example, in a study of performance measures used in the strategic management literature, Woo
and Willard (1983) factor analyzed fourteen financial variables and
found that the variables loaded on four key factors. In a more recent
study, Tosi, Werner, Katz, and Gomez-Mejia (2000) also applied factor analysis to derive eight performance factors from a variety of
performance variables commonly used in firms.
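As an illustration of these mechanics, the sketch below runs a principal component analysis on a matrix of performance ratios (one row per foundation, one column per ratio), retains components with eigenvalues greater than 1.0, and computes their loadings. It is a minimal example on simulated data with unrotated components; the analyses reported here additionally apply a promax rotation before interpreting loadings of .40 or greater.

```python
import numpy as np

# Sketch: eigenvalue-based component retention on a foundations-by-ratios matrix.
# X is simulated data standing in for the study's 122 foundations and 16 ratios.
rng = np.random.default_rng(0)
X = rng.normal(size=(122, 16))

R = np.corrcoef(X, rowvar=False)             # correlation matrix of the sixteen ratios
eigenvalues, eigenvectors = np.linalg.eigh(R)
order = np.argsort(eigenvalues)[::-1]        # sort components from largest to smallest
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

keep = eigenvalues > 1.0                     # Kaiser criterion (eigenvalue greater than 1.0)
loadings = eigenvectors[:, keep] * np.sqrt(eigenvalues[keep])

print("components retained:", int(keep.sum()))
print("proportion of variance explained:", round(eigenvalues[keep].sum() / len(eigenvalues), 2))
# After rotation (promax in the article), loadings of .40 or greater indicate which
# ratios group together on a given performance factor.
print(np.round(loadings, 2))
```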
Results
We conducted an exploratory factor analysis on the university foundation data for the 1995 filing year. Using eigenvalues greater than
1.0, evaluation of scree plots, total explained variance, and factor
loadings greater than .4 as criteria for identifying meaningful factors
(see Nunnally and Bernstein, 1994), we identified six factors (or categories). In order to control for potentially significant outside variables that might result in spurious outcomes, the researchers also
factor analyzed the initial sixteen measures using IRS Form 990 hospital foundation data for the same year and evaluated them for ratio
overlaps between the two foundation types. Of 362 hospital foundations (NTEE code E221) in the database, 101 contained information
on all sixteen measures. Four pairs of measures common to both university and hospital foundations were retained for further analyses.
The criteria for retention of measures for further analysis were that
(1) a given factor must contain at least two measures common to
both organizational types (that is, demonstrate high loadings) and
(2) performance ratios must carry the same sign. Based upon these
criteria, the researchers dropped the public support 2 and investment
concentration categories from the analysis. They conducted an additional factor analysis separately on 1995 university foundation data,
by forcing four factors using these resultant pairs of measures. The
financial measures, performance categories, and factor loadings
are presented in Table 2. Again, using eigenvalues greater than 1.0,
the scree plot, total explained variance, and factor loadings greater
than .4 as criteria for identifying meaningful factors (see Nunnally
and Bernstein, 1994), the four factors explained 92 percent of the total
variance for the university foundations. The results are in Table 3.
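The retention rule described above can be sketched as follows: given loading matrices from the university and hospital analyses (rows are measures, columns are factors), keep groups of at least two measures that load at .40 or above on a common factor in both samples with matching signs. The matrices, threshold handling, and helper functions below are illustrative assumptions, not the study's actual code.

```python
import numpy as np

def high_loaders(loadings, threshold=0.40):
    """Map each factor to the set of (measure index, loading sign) pairs loading >= threshold."""
    groups = {}
    for factor in range(loadings.shape[1]):
        col = loadings[:, factor]
        hits = {(m, int(np.sign(col[m]))) for m in range(len(col)) if abs(col[m]) >= threshold}
        if hits:
            groups[factor] = hits
    return groups

def retained_pairs(univ_loadings, hosp_loadings, threshold=0.40):
    """Return measure groups that load together on a single factor in BOTH samples with the same sign."""
    univ, hosp = high_loaders(univ_loadings, threshold), high_loaders(hosp_loadings, threshold)
    kept = []
    for u_hits in univ.values():
        for h_hits in hosp.values():
            common = u_hits & h_hits          # same measures, same loading signs
            if len(common) >= 2:              # criterion: at least two shared measures
                kept.append(sorted(m for m, _ in common))
    return kept

# Tiny illustrative example: 4 measures, 2 factors per sample (all values are made up).
univ = np.array([[0.9, 0.1], [0.8, 0.0], [0.1, 0.9], [0.0, 0.2]])
hosp = np.array([[0.7, 0.2], [0.9, 0.1], [0.2, 0.8], [0.1, 0.1]])
print(retained_pairs(univ, hosp))   # -> [[0, 1]]: measures 0 and 1 form a retained pair
```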
In an effort to determine whether these performance categories
were distinct over a period of years, we subjected the four pairs of
measures to factor analysis for each year during the period 1990 to
1995. The results of this analysis revealed that a measure in the
investment performance category was unstable, showing negative
loadings for the years 1991, 1993, and 1994. Further examination of
the data revealed that variation in whether foundations recorded Form 990 line item 8c (net gain or loss from sale of securities) was confounding the results for measures in this factor. Because of the spurious nature of this line item and its influence on the analysis, the authors dropped the pair of investment performance measures from the analysis.
Table 2. Results from Exploratory Factor Analyses

[Table 2 reports the factor loadings of the sixteen financial measures on five preliminary factors (fiscal performance, fundraising efficiency, public support 1, investment performance, and public support 2), shown separately for the university and hospital foundation samples; the individual loadings are not reproduced here.]

Note: Rotation method: promax with Kaiser normalization. Both rotations converged in six iterations.
Table 3. University Foundation Factor Analyses

                                                    Fundraising  Public   Investment   Fiscal
                                                    Efficiency   Support  Performance  Performance
Direct public support divided by fundraising
  expenses                                              .99        -.01       .01          .03
Total revenue divided by fundraising expenses           .99         .03      -.01         -.02
Total contributions divided by total revenue            .00         .94      -.02         -.04
Direct public support divided by total assets           .02         .90       .03         -.02
Net gain or loss divided by total securities            .00        -.04       .94          .04
Securities revenues divided by total securities         .00         .05       .94         -.04
Total revenue divided by total expenses                 .02        -.25       .00         1.0
Total contributions divided by total expenses          -.03         .33       .00          .85

Note: Principal component analysis (promax rotation). Rotation converged in five iterations.
The authors then replicated the longitudinal analysis with the three
remaining pairs of measures for each year (1990–1995). Each year
revealed three distinct factors. As a final check for the distinct presence of the three categories of performance, the researchers averaged
each of the six financial measures for the six-year period
(1990–1995) and conducted a final factor analysis. Seventy-nine university foundations contained data on all six financial performance
measures for the six-year period, an adequate case-to-variable ratio.
Results are in Table 4. This analysis indicated that the communalities for all six variables were above .87, suggesting that a large portion of the variance in each financial performance measure is accounted for by its respective factor (or category).
Table 4. Results from 1990–1995 Factor Analysis

                                                    Fundraising  Fiscal       Public
                                                    Efficiency   Performance  Support
Total revenue divided by total fundraising
  expenses                                              1          -.01         -.03
Direct public support divided by total
  fundraising expenses                                  .99          0           .03
Total revenue divided by total organizational
  expenses                                             -.01         1.0         -.24
Total contributions divided by total
  organizational expenses                               .01          .92         .26
Direct public support divided by total assets           0           -.18         .94
Total contributions divided by total revenue            .02          .16         .90
With regard to cumulative explained variance, the three factors explained 95 percent of the total variance, well above
the generally accepted threshold of 70 to 80 percent (see Hatcher and
Stepanski, 1994).
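For readers who want to reproduce these diagnostics, the sketch below computes communalities and cumulative explained variance from a factor loading matrix, assuming an orthogonal solution (with an oblique rotation such as promax, these sums are only approximations). The loading values are illustrative, not the study's results.

```python
import numpy as np

# Sketch: communalities and cumulative explained variance from a loading matrix.
# Assumes an orthogonal factor solution; the loadings below are illustrative only.
loadings = np.array([
    [0.95, 0.05, 0.02],
    [0.93, 0.10, 0.01],
    [0.04, 0.91, 0.12],
    [0.02, 0.94, 0.08],
    [0.06, 0.03, 0.92],
    [0.01, 0.11, 0.90],
])

communalities = (loadings ** 2).sum(axis=1)              # variance of each measure explained by the factors
variance_per_factor = (loadings ** 2).sum(axis=0) / loadings.shape[0]
cumulative_variance = variance_per_factor.sum()

print(np.round(communalities, 2))      # one value per financial measure
print(round(cumulative_variance, 2))   # proportion of total variance explained by all factors
```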
Phase 2 Results
We derived data for Phase 2 from a sample totaling 144 university
foundation IRS Forms 990 for fiscal year 1998 to 1999. The sample
contained 25 percent Carnegie class 1 organizations, 15 percent
Carnegie class 2 organizations, 40 percent Carnegie class 3 organizations, and 15 percent Carnegie class 4 and higher numbered organizations. Of these, 102 foundations contained information on all six
financial performance measures. Using eigenvalues greater than 1.0,
a scree plot, total explained variance, and factor loadings greater than
.4 as criteria for identifying meaningful factors (see Nunnally and
Bernstein, 1994), the researchers found that three factors (or categories) explained 94 percent of the variance. The three retained factors were clearly distinct, displaying eigenvalues greater than 1.2, whereas the remaining factors showed eigenvalues of less than .37. Performance categories, variable loadings, and
descriptive statistics are in Table 5.
Summary of Performance Categories
The resulting performance categories can be described as follows.
Fundraising Efficiency. Efficiency measures traditionally consider
ratios relating outputs to inputs (Berman, 1998). Greenlee and Bukovinsky's (1998) measure is included in this category and represents total dollars raised relative to monies spent on fundraising activities.
Table 5. University Foundation Factor Analysis: Resultant Financial Measures
and IRS Form 990 Line Item Labels

                                                    Mean    SD    Fundraising  Public   Fiscal
                                                                  Efficiency   Support  Performance
Direct public support divided by fundraising
  expenses (Greenlee and Bukovinsky, 1998)
  (line 1A ÷ line 44D)                               84     312      .99         .06       .08
Total revenue divided by fundraising expenses
  (line 12 ÷ line 44D)                              121     400      .99         .01       .05
Total contributions divided by total revenue
  (Siciliano, 1996; Greenlee and Bukovinsky, 1998)
  (line 1D ÷ line 12)                                .65    .18      .10         .86       .22
Direct public support divided by total assets
  (line 1A ÷ line 59B)                               .16    .11     -.02         .91      -.05
Total revenue divided by total expenses
  (Siciliano, 1996, 1997) (line 12 ÷ line 17)       2.54   2.89      .06         .10       .99
Total contributions divided by total expenses
  (line 1D ÷ line 17)                               1.8    2.9       .07         .20       .98

Note: Principal component analysis (promax rotation). Rotation converged in four iterations.
This factor displayed an eigenvalue of 2.6 and accounted
for 43 percent of the total variance of the three performance categories. The two variables that compose this factor indicated loadings
above .99, suggesting that these measures constitute the majority of
the variance in this factor.
Public Support. The public support category contains variables
relating to fundraising outcomes and is an indicator of an organization’s ability to generate revenue or an index of the public support
for an organization (Siciliano, 1996). Both Siciliano’s measure and
the ratio of direct public support to total assets had high loadings of
.86 and .91, respectively, on this factor. This category displayed an
eigenvalue of 1.2 and explained 21 percent of the variance.
Fiscal Performance. Siciliano (1997) used the ratio of total revenues to total expenses in a study of YMCAs as an indicator of fiscal
performance. According to the current analysis, a second useful measure of fiscal performance is the ratio of total contributions to total
expenses. The two measures in this category had factor loadings
exceeding .97, with an eigenvalue of 1.8 and explaining 30 percent
of the variance among the three performance categories.
Discussion and Conclusions
The results of this study provide evidence for the distinctiveness of
financial performance measures as tested on cross-sectional and longitudinal data. Specifically, we found that six financial performance
measurement ratios representing three performance-related categories
were distinct. These dimensions—fundraising efficiency, public support, and fiscal performance—can be viewed as unique dimensions
in judging the financial position of the foundations this study examined. Because the NPO literature fails to support any financial measure as the definitive way to judge performance but rather reveals a
confusing assortment of measures currently in use, this study’s findings help to illustrate a means for using factor analysis on a set of
NPO data to evaluate and identify distinct financial performance
categories and respective measurement ratios.
Key informant interviews with foundation executives revealed
that an important consideration in the application of performance
measures is the ease with which researchers can gather critical information. The results from this study provide the practitioner with a
parsimonious number of financial performance measures enabling
relatively easy assessment of three important performance-related
dimensions. For example, a university foundation executive may be
able to obtain a reasonably accurate assessment of fiscal performance
by selecting a measure such as total revenue divided by total expenses.
Because all of the performance measures in this study use IRS Form
990 line items, the calculation of performance outcomes for a given
year or series of years for other organizational types is readily accomplished (this information is readily accessible either within the NPO
or from the Internet, for example, http://www.guidestar.com). Note
that such an evaluation would yield the most accurate results when
applied to organizations of similar type.
Another practical implication of the study is the potential to
develop a composite performance measure. The current study falls
within the bounds of “multidimensional” approaches to effectiveness
(Forbes, 1998, p. 189) and supports the thesis that “nonprofit organizational effectiveness is multidimensional and will never be
reducible to a single measure” (Herman and Renz, 1999, p. 110).
With this in mind, a practical extension of the results would be for
each NPO to use the current findings to idiosyncratically aggregate
and weight the measures or performance categories. For example, an
NPO practitioner who feels that the three financial performance categories are equally important would likely assign an equivalent
weight of 33 percent to each category, whereas stakeholders of a new
foundation might elect to emphasize fundraising by placing a higher
percentage weight (for example, 40 percent) on the public support
category than on other categories. In addition to the specific measurement applications identified earlier, this study’s results provide a
tested set of financial ratios by which practitioners might compare
financial performance measurement of university foundations.
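As a sketch of this aggregation idea, the snippet below combines one representative ratio per category into a single weighted composite. The benchmark values, weights, and normalization choice are hypothetical; in practice each NPO would choose weights reflecting its own priorities and would need to put the ratios on a comparable scale (here, relative to a peer-group benchmark) before combining them.

```python
# Sketch: an idiosyncratically weighted composite of the three performance categories.
# Ratio values echo the Table 5 means; the benchmarks and weights are hypothetical.

category_scores = {
    # each score is this foundation's ratio relative to an assumed peer-group benchmark
    "fundraising_efficiency": 84 / 100,    # direct public support / fundraising expenses
    "public_support": 0.65 / 0.60,         # total contributions / total revenue
    "fiscal_performance": 2.54 / 2.00,     # total revenue / total expenses
}

weights = {
    "fundraising_efficiency": 0.30,
    "public_support": 0.40,    # e.g., a new foundation emphasizing fundraising outcomes
    "fiscal_performance": 0.30,
}

composite = sum(weights[c] * category_scores[c] for c in category_scores)
print(f"composite performance score: {composite:.2f}")   # > 1.0 means above the benchmark
```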
The implications of identifying financial performance categories
offer benefits to researchers as well. The issue of generalizability of
research findings is an important consideration among researchers.
As mentioned earlier, some have argued that the development of
context-specific measures of performance (applying to a single organization) has significantly slowed the progression toward consensus
of appropriate performance measures (Herman and Renz, 1999). The
results of the current study help to reverse this trend by offering
researchers a process for evaluating the distinctiveness of financial
performance measures as applied to one type of nonprofit. Further,
although definitions and taxonomies of NPOs (see Hall, 1987;
Wooten, 1975; Oster, 1995; and Galaskiewicz and Bielefeld, 1998)
have helped to distinguish more clearly between NPO types, a great void remains in current knowledge about similar “performance types.”
By performance type, we mean organizations that may use similar
financial measures of performance even if they are not the same nonprofit classification (for example, when financial measures factor into
similar categories for different nonprofit organization categories, such
as social services, arts, and education). The identification and testing of performance measures and domains in one NPO sector (for
example, university foundations) opens wide the opportunity to evaluate organizations in other NPO sectors in search of commonalities
between industries. Large data-gathering projects, similar to the work
of the NCCS, have laid the foundation for classification schemas in
their application of the NTEE. The next logical step for researchers
is to test different types of NPOs (using schemas such as the NTEE)
with the performance measures from the current findings to ascertain
applicability. Future research could better assess NPO types and
the degree to which they cluster in terms of financial performance
measurement.
The results also offer researchers a less expensive starting point
for measuring NPOs’ financial performance. For example, researchers
studying human services-oriented NPOs (such as hospice facilities)
may find this study’s financial performance categories helpful for
examining the distinctiveness of similar categorical measures.
Researchers may also find this study helpful in considering new conceptual ideas about assigning weights for financial performance measurement. For example, although fundraising is typically an
important part of assessing NPO financial performance, it may be
more important to new direct-support organizations than to older,
well-established fee-for-service organizations.
Another possible extension of this study is to test its financial
performance measures and categories with other measures currently
in use (for example, socially constructed measures like executive perceptions of performance) to determine whether the two converge.
Although recent research has emphasized the use of various social
performance measures (Herman and Renz, 1998), developing and
implementing these measures tends to be time-consuming. Financial
measures, despite their shortcomings in contrast to social measures,
generally are more objective and arguably more convenient to use.
A study might also consider the factors that influence convergence or divergence between financial performance measures and the
socially constructed measures. Such a comparison would provide
greater insight regarding the accuracy with which NPO executives,
for example, subjectively assess their organizations’ performance.
We should mention this study’s limitations. One is that the analysis of measures is centered on one type of NPO, university foundations; therefore, the findings may not be generalizable to other NPO
types. Although we eliminated a number of problematic measures
(for example, investment performance measures) during Phase 1,
some accounting-related shortcomings are also associated with the
use of financial performance measures. For example, managers have
admitted to classifying fund balances in a manner that improves their
NPO’s image in the eyes of fund providers (Froelich and Knoepfle,
1996). A focus on the fiscal performance measures might prompt
managers to cut back on expenses in an effort to meet short-term
organizational goals. In this case the NPO might demonstrate high
short-term performance results yet fall short of delivering mission-critical services, creating performance problems in the long run.
Another related limitation involves the varying treatments of depreciation, revenue from investments, inventory valuation, and methods of consolidating accounts, all of which may be cause for
discrepancies in the current analysis. Specifically, because each of the
performance categories involves financial ratios, incremental changes
in account balances that compose the ratios may result in inaccurate
presentations of performance. Performance ratios involving total
assets are particularly sensitive to such adjustments because other
financial accounts (for example, depreciation) are factored into the
calculation of total assets. An additional accounting-related issue is
the fact that the data in this study is limited to information from IRS
Form 990. Foundations with income less than $25,000 per year are
not likely to be included in the study because they are not required
to file the form. The inclusion of smaller foundations would likely
have had an impact on the current results. In view of this, the results
of this study will be most accurately applied across Carnegie class 1,
2, and 3 organizations.
Another area of concern is the investment performance category.
The measures in this original category may prove to be particularly
problematic in very strong or very weak markets. For example, organizations that emphasize investment performance may be inadvertently
providing an incentive for managers to invest NPO assets in financial
instruments that exceed established risk thresholds. Although such an
organization may demonstrate superior financial performance in the
investment performance category, this may not accurately reflect its
overall performance. Similarly, organizations that are heavily invested
during market downturns may demonstrate abnormally low performance. For example, recent reports indicate that university endowments lost nearly 3.6 percent of their investments in 2001 (Golden and
Forelle, 2002), offering a sobering view of how economic downturns
can wreak havoc on an organization. Such high-risk investment strategies could result in disaster for NPOs as well. Future research on measuring the investment risks that foundations take would help both
researchers and practitioners better evaluate such strategies.
Last, the final measures and categories shown in Table 5 are not
necessarily exhaustive when one considers NPO types other than
foundations. This is particularly true for NPOs that have substantially different missions and services from those foundations. Therefore, some of the original ratios used in this study may in fact be
useful performance measures when applied to other NPO types.
Future research might use a similar analysis to determine the extent
to which this may apply.
WILLIAM J. RITCHIE is assistant professor of management at Florida Gulf
Coast University and teaches strategic management. He earned his Ph.D.
in strategy at Florida State University and has served in management
and fundraising positions in a variety of nonprofit organizations.
ROBERT W. KOLODINSKY is assistant professor of management at James
Madison University in Harrisonburg, Virginia. He received his Ph.D.
at Florida State University in organizational behavior and human
resources management, and is a three-time small business owner and a
small business founder.
References
Berman, E. M. Productivity in Public and Nonprofit Organizations.
Thousand Oaks, Calif.: Sage, 1998.
Cameron, K. S. “Assessing Organizational Effectiveness in Institutions of Higher Education.” Administrative Science Quarterly, 1978,
23, 604–632.
Cameron, K. S. “Effectiveness as Paradox: Consensus and Conflict in
Conceptions of Organizational Effectiveness.” Management Science,
1986, 32, 539–553.
Forbes, D. P. “Measuring the Unmeasurable: Empirical Studies of
Nonprofit Organization Effectiveness from 1977 to 1997.” Nonprofit and Voluntary Sector Quarterly, 1998, 27 (2), 183–202.
Froelich, K. A., and Knoepfle, T. W. “Internal Revenue Service 990
Data: Fact or Fiction.” Nonprofit and Voluntary Sector Quarterly,
1996, 25 (1), 40–52.
Galaskiewicz, J., and Bielefeld, W. Nonprofit Organizations in an Age
of Uncertainty: A Study of Organizational Change. Hawthorne, N.Y.:
Aldine de Gruyter, 1998.
Golden, D., and Forelle, C. “Colleges Feel Pinch as Endowments
Shrink.” Wall Street Journal, July 19, 2002, p. B1.
Green, J., and Griesinger, D. “Board Performance and Organizational
Effectiveness in Nonprofit Social Service Organizations.” Nonprofit
Management and Leadership, 1996, 6 (4), 381–402.
Greenlee, J. S., and Bukovinsky, D. “Financial Ratios for Use in the
Analytical Review of Charitable Organizations.” Ohio CPA Journal,
Jan.–Mar. 1998, pp. 32–38.
Hair, J. F., Anderson, R. E., Tatham, R. C., and Black, W. C. Multivariate Data Analysis. (5th ed.) Upper Saddle River, N.J.: Prentice
Hall, 1998.
Hall, P. D. “A Historical Overview of the Private Nonprofit Sector.”
In W. W. Powell (ed.), The Nonprofit Sector: A Research Handbook.
New Haven, Conn.: Yale University Press, 1987.
Hatcher, L., and Stepanski, E. J. A Step-by-Step Approach to Using the
SAS System for Univariate and Multivariate Statistics. Cary, N.C.:
SAS Institute, 1994.
Herman, R. D., and Renz, D. O. “Multiple Constituencies and the
Social Construction of Nonprofit Organization Effectiveness.”
Nonprofit and Voluntary Sector Quarterly, 1997, 26, 185–206.
Herman, R. D., and Renz, D. O. “Nonprofit Organizational Effectiveness: Contrasts Between Especially Effective and Less Effective Organizations.” Nonprofit Management and Leadership, 1998, 9 (1), 23–38.
Herman, R. D., and Renz, D. O. “Theses on Nonprofit Organizational
Effectiveness.” Nonprofit and Voluntary Sector Quarterly, 1999,
28 (2), 107–125.
Hoefer, R. “Accountability in Action? Program Evaluation in
Nonprofit Human Service Agencies.” Nonprofit Management and
Leadership, 2000, 11 (2), 167–177.
Jackson, D. K., and Holland, T. P. “Measuring the Effectiveness of
Nonprofit Boards.” Nonprofit and Voluntary Sector Quarterly, 1998,
27, 159–182.
Kanter, R. M., and Summers, D. V. “Doing Well While Doing Good:
Dilemmas of Performance Measurement in Nonprofit Organizations and the Need for a Multiple Constituency Approach.” In
W. W. Powell (ed.), The Nonprofit Sector: A Research Handbook. New
Haven, Conn.: Yale University Press, 1987.
Kirchoff, B. A. “Organizational Effectiveness Measurement and Policy Research.” Academy of Management Review, 1977, 2 (3),
347–355.
Murray, V., and Tassie, B. “Evaluating the Effectiveness of Nonprofit
Organizations.” In R. D. Herman (ed.), Jossey-Bass Handbook of Nonprofit Leadership and Management. San Francisco: Jossey-Bass, 1994.
Nunnally, J. C., and Bernstein, I. H. Psychometric Theory. (3rd ed.)
New York: McGraw-Hill, 1994.
Oster, S. M. Strategic Management for Nonprofit Organizations: Theory and Cases. New York: Oxford University Press, 1995.
Rojas, R. R. “A Review of Models for Measuring Organizational Effectiveness Among For-Profit and Nonprofit Organizations.” Nonprofit
Management and Leadership, 2000, 11 (1), 97–104.
Siciliano, J. I. “The Relationship of Board Member Diversity to Organizational Performance.” Journal of Business Ethics, 1996, 15,
1313–1320.
Siciliano, J. I. “The Relationship Between Formal Planning and
Performance in Nonprofit Organizations.” Nonprofit Management
and Leadership, 1997, 7 (4), 387–403.
Stone, M. M., Bigelow, B., and Crittenden, W. “Research on Strategic
Management in Nonprofit Organizations.” Administration and
Society, 1999, 31 (3), 378–423.
Stone, M. M., and Crittenden, W. “A Guide to Journal Articles on
Strategic Management in Nonprofit Organizations, 1977–1992.”
Nonprofit Management and Leadership, 1993, 4 (2), 193–213.
Tosi, H. L., Werner, S., Katz, J. P., and Gomez-Mejia, L. R. “How
Much Does Performance Matter? A Meta-Analysis of CEO Pay
Studies.” Journal of Management, 2000, 26 (2), 301–339.
Tuckman, H. P., and Chang, C. F. “How Pervasive Are Abuses in
Fundraising Among Nonprofits?” Nonprofit Management and
Leadership, 1998, 9 (2), 211–221.
Urban Institute. National Taxonomy of Exempt Entities Core Codes.
Washington, D.C.: National Center for Charitable Statistics, Urban
Institute, 1998.
Woo, C. Y., and Willard, G. “Performance Representation in Strategic
Management Research: Discussions and Recommendations.” Paper
presented at the 23rd annual national meetings of the Academy of
Management, Dallas, Aug. 1983.
Wooten, L. M. “Management in the Third Sector.” Public Administration Review, 1975, 444–455.