MEMORANDUM
Date: February 4, 2003
To: The Commissioner
From: Inspector General
Subject: Performance Indicator Audit: Customer Satisfaction (A-02-02-11082)
We contracted with PricewaterhouseCoopers (PwC) to evaluate the data used to measure 18 of the Social Security Administration's (SSA) Fiscal Year 2002 performance indicators established to comply with the Government Performance and Results Act. The attached final report presents the results of two of the performance indicators PwC reviewed. The objective of this audit was to assess the reliability of the data used to measure the percent of people doing business with SSA who rate their overall satisfaction as good, very good or excellent.
Please comment within 60 days from the date of this memorandum on corrective action taken or planned on each recommendation. If you wish to discuss the final report, please call me or have your staff contact Steven L. Schaeffer, Assistant Inspector General for Audit, at (410) 965-9700.
James G. Huse, Jr.
OFFICE OF THE INSPECTOR GENERAL
SOCIAL SECURITY ADMINISTRATION
PERFORMANCE INDICATOR AUDIT: CUSTOMER SATISFACTION
February 2003
A-02-02-11082
AUDIT REPORT
Mission
We improve SSA programs and operations and protect them against fraud, waste, and abuse by conducting independent and objective audits, evaluations, and investigations. We provide timely, useful, and reliable information and advice to Administration officials, the Congress, and the public.
Authority
The Inspector General Act created independent audit and investigative units, called the Office of Inspector General (OIG). The mission of the OIG, as spelled out in the Act, is to:
Conduct and supervise independent and objective audits and investigations relating to agency programs and operations.
Promote economy, effectiveness, and efficiency within the agency.
Prevent and detect fraud, waste, and abuse in agency programs and operations.
Review and make recommendations regarding existing and proposed legislation and regulations relating to agency programs and operations.
Keep the agency head and the Congress fully and currently informed of problems in agency programs and operations.
To ensure objectivity, the IG Act empowers the IG with:
Independence to determine what reviews to perform.
Access to all information necessary for the reviews.
Authority to publish findings and recommendations based on the reviews.
Vision
By conducting independent and objective audits, investigations, and evaluations, we are agents of positive change striving for continuous improvement in the Social Security Administration's programs, operations, and management and in our own office.
MEMORANDUM
To: Office of the Inspector General
From: PricewaterhouseCoopers LLP
Date: January 27, 2003
Subject: Performance Indicator Audit: Customer Satisfaction (A-02-02-11082)
The Government Performance and Results Act (GPRA) of 1993 requires the Social Security Administration (SSA) to develop performance indicators that assess the relevant service levels and outcomes of each program activity set forth in its budget. GPRA also calls for a description of the means employed to verify and validate the measured values used to report on program performance. The objective of this audit was to assess the reliability of the data used to measure the following Fiscal Year (FY) 2002 GPRA performance indicators:
Performance Indicator                                                                                            FY 2002 Goal
Percent of people who do business with SSA rating the overall service as "excellent," "very good," or "good."   82%
Percent of people who do business with SSA rating the overall service as "excellent."                           30%
Because FY 2002 survey data and results were not available at the time of this audit, we used the latest available data (FY 2001) in our audit. See Appendix A for a description of the audit scope and methodology.
BACKGROUND
SSA offers retirement and long-term disability programs to the general public. Old-Age, Survivors, and Disability Insurance (OASDI) is authorized under title II of the Social Security Act. Through the OASDI program, eligible workers and sometimes their families receive monthly benefits if they retire at an appropriate age or are found to have a disability that either prevents them from engaging in substantial gainful activity for at least 12 months or can be expected to result in death. Supplemental Security Income (SSI) is authorized under title XVI of the Social Security Act and provides monthly payments to aged, blind, and disabled individuals based on financial need and medical requirements.
One of SSA's strategic goals is to provide world-class customer service for individuals participating in the OASDI and SSI programs. In its FY 2002 Annual Performance Plan, SSA included two performance indicators with respect to customer satisfaction. The first performance indicator measures the percent of core business customers who rated the service received as "excellent," "very good," or "good" (E/VG/G) on a 6-point scale ranging from "excellent" to "poor." The percentage is calculated by dividing the number of E/VG/G responses by the total number of responses. The second performance indicator measures the percent of core business customers who rated the overall service as "excellent" on the same 6-point scale. That percentage is calculated by dividing the number of "excellent" responses by the total number of responses.
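To illustrate the arithmetic, the following is a minimal sketch using hypothetical response counts, not SSA survey data; the lower rating categories shown are assumed for illustration only.

    # Hypothetical counts of responses on the 6-point rating scale (illustrative only).
    responses = {
        "excellent": 250,
        "very good": 300,
        "good": 260,
        "fair": 120,        # assumed lower-end categories, for illustration
        "poor": 50,
        "very poor": 20,
    }

    total = sum(responses.values())

    # First indicator: percent of responses rating service excellent, very good, or good.
    e_vg_g = sum(responses[r] for r in ("excellent", "very good", "good"))
    pct_e_vg_g = 100.0 * e_vg_g / total

    # Second indicator: percent of responses rating service excellent.
    pct_excellent = 100.0 * responses["excellent"] / total

    print(f"E/VG/G: {pct_e_vg_g:.1f}%, Excellent: {pct_excellent:.1f}%")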
To assess its progress in meeting this goal, SSA developed a strategy to track customer satisfaction with SSA interactions under the agency's Market Measurement Program. The interaction tracking surveys measure customers' satisfaction with their last contact with SSA. They consist of three surveys: the 800-Number Customer Survey, the Field Office (FO) Caller Survey, and the Office Visitor Survey. To report the final customer satisfaction indicators, the Office of Quality Assurance and Performance Assessment (OQA) combines the customer satisfaction from the three surveys, weighting each survey by the customer universe it represents.
The 800-Number Customer Survey evaluates the satisfaction of customers who call SSA's 800-number. When a customer calls the toll free number, the Automatic Number Identifier (ANI) system collects data about the call, i.e., phone number, date, time, and duration. OQA selects a random sample of completed calls over a four-week period twice a year (principally August and February). OQA excludes calls from blocked numbers, businesses, pay phones, or other locations where the customer is not identifiable. OQA has a contractor call the customer, conduct the survey, and compile the responses. OQA analyzes and reports the results.
The FO Caller Survey evaluates the satisfaction of customers who call a selected FO. OQA selects approximately 110 FOs to participate in the FO Monitoring Survey. From this group, OQA selects approximately 50 FOs each year to participate in the FO Caller Survey. (See Appendix D for more description of the selection process.) The FO Caller Survey is performed twice a year for four weeks each time, encompassing most of November and May. During the survey period, each FO's Caller ID system records the date and time of contact, length of call, and phone number of the caller. FOs then report this information to OQA, which selects a random sample of callers to participate in the survey. OQA excludes calls from blocked numbers, businesses, pay phones, or other locations where the customer is not identifiable. OQA has a contractor conduct the survey via phone and compile the responses. OQA analyzes and reports the results.
The Office Visitor Survey evaluates the satisfaction of customers who visit a participating FO or Hearing Office (HO). OQA makes 2 separate selections of 52 FOs and 13 HOs to participate for 1 week in the survey. (See Appendix D for more description of the selection process.) OQA staggers the FOs and HOs participating in the survey over an 8-week period, during the second and fourth quarters of the FY. The fourth quarter survey starts at the end of July and extends through mid September, and the second quarter survey starts at the end of January and extends through late March. The FO or HO records each customer's name, address, telephone number, and reason for the visit and forwards this information electronically to OQA daily. Twice a week, OQA selects a random sample of customers to participate in the survey. A contractor mails the survey to the selected customers. Customers are asked to return the survey directly to OQA, which analyzes and reports the results.
Appendix C provides a workflow and description of each survey.
RESULTS OF REVIEW
From May 2002 to July 2002, we reviewed the processes, controls, and data used to generate the FY 2001 customer satisfaction performance indicators:
1. Percent of people who do business with SSA rating the overall service as "excellent," "very good," or "good."
2. Percent of people who do business with SSA rating the overall service as "excellent."
Overall, we found that the indicators were accurately and reasonably calculated in FY 2001. We also reviewed the methodology SSA planned to use to calculate these indicators in FY 2002. Although the final FY 2002 data and results were not available for this report, SSA indicated it will use the same approach to calculate the FY 2002 performance indicators that it used in FY 2001.
We identified areas where methods used to measure the indicators may be improved. The indicator results could better reflect the satisfaction of all people who do business with SSA. Also, SSA could implement better controls to enhance the reliability of the results.
PERFORMANCE INDICATOR DATA WAS RELIABLE
We reviewed the processes and controls for the FY 2001 and FY 2002 interaction tracking surveys administered by OQA from July 2000 to June 2002. Our review did not identify any differences between the 2 years that would affect performance indicator results. At the time of our review, OQA had not completed processing of the FY 2002 surveys and had not developed the final FY 2002 performance indicator results. Therefore, we conducted our audit of the data and performance indicators for the most recent available time period, FY 2001.
Our overall assessment is that the weighted average survey results are a reasonable calculation of the overall customer satisfaction performance indicators. (See Appendix D for a description of the weighting process.) The three surveys that form the basis of the indicators were developed and administered accurately. For FY 2002, the methodology used to define the sample frame, perform a random sample selection, and conduct the survey is appropriate for the type of survey estimates produced and is expected to give a statistically representative result of the population being measured. Positive elements we observed in the survey process included:
A well-structured approach to defining and listing the sample frame and selecting the sample.
A survey approach that optimized response rate within the available resources.
Thorough controls for reviewing the survey data collected by the contractors.
Trained and knowledgeable staff within OQA to produce the survey results.
Despite the good execution of the current set of surveys, there is some risk that the survey results calculated are not representative of all people who do business with SSA.
We successfully replicated the survey results reported by OQA for FY 2001 and believe the final performance indicator results are accurate. However, we noted weaknesses in some internal controls for calculating the survey results.
SURVEY RESULTS MAY NOT ADEQUATELY REFLECT THE ENTIRE CUSTOMER POPULATION
Our audit identified two areas of potential concern about whether the performance indicators reflect the intended "population of people who do business with SSA."
Certain SSA Transactions Are Not Represented in Any of the Surveys
Mail Transactions - OQA acknowledged that it does not have a survey to formally review customer satisfaction with mail transactions. OQA does not have statistics available on the relative size of this customer segment, although its consensus was that the segment is shrinking. OQA noted that it is hard to determine the number of mail transactions because there are several places where SSA receives them.
Internet Transactions - OQA indicated it has Internet surveys on customer satisfaction through the SSA web site. However, because the Internet survey is self-selecting, only those customers interested in completing it are represented. As a result, the Internet survey results are not included in the formal customer satisfaction performance indicators.
The satisfaction of mail and Internet customers may be significantly different than the customer satisfaction currently represented in the overall performance indicators. If so, the current performance indicators are biased estimates of the true level of satisfaction of people who do business with SSA. If the satisfaction levels for mail and Internet transactions are similar to the overall results of the current surveys or if the populations represented by these types of transactions are small, the effect of the bias is small. Without further information, we are not able to estimate the magnitude of this particular finding but we do note that it is a risk.
SSA's FY 2001 Performance and Accountability Report specifies that the customer satisfaction indicators are calculated based on service contacts, "either by telephone or in-person." Thus, the reported indicators properly inform the reader of which populations are represented by the results.
Surveys Are Not Performed on a Continuous Basis, But Over Discrete Time Periods
SSA conducts the 800-Number Customer and FO Caller surveys biannually. Each time, the survey extends over a 4-week period. The 800-Number Customer surveys are done in August and February and the FO Caller surveys are done in November and May.
SSA also conducts the Office Visitor surveys biannually. Each time, the survey extends over an 8-week period. For the Office Visitor surveys, the first survey starts at the end of July and extends through mid September and the second starts at the end of January and extends through late March. As a result, OQA conducts very few or no interaction tracking surveys in more than 6 months of the year (January, April, June, July, October, and December). It is unlikely that the customer satisfaction from the interaction tracking surveys exactly matches the true customer satisfaction across the entire year.
However, we cannot exactly quantify the impact of this observation on the results for FY 2001. Based on the current survey design, we can make the following statements:
For the 800-Number Customer and FO Caller surveys, for each percentage point that customer satisfaction is lower or higher during the non-sampled time periods, the survey results are overstated or understated, respectively, by 0.85 percentage points.
For the Office Visitor surveys, for each percentage point that customer satisfaction is lower or higher during the non-sampled time periods, the survey results are overstated or understated, respectively, by 0.69 percentage points.
(Note: these values are approximated and do not take into account survey weighting or national holidays. See Appendix D for more detail about the calculations.)
OQA does not believe that customer satisfaction fluctuates enough to warrant performing the surveys on a continuous basis. As support, OQA explained that it once conducted the 800-Number Customer surveys quarterly and that it reduced the surveys to twice per year because of the consistency of the results. Furthermore, OQA has informed SSA that it will begin conducting each of the interaction surveys only once per year, effective in FY 2003, due to workload and budgetary constraints.
To investigate the contention of stable survey results, we examined some of the recent 800-Number Customer caller results, shown below. The results from each time period within each FY do not differ greatly and in most cases do not represent a statistically significant difference. However, excluding one of the surveys can change the overall satisfaction measure. For example, if the February 2001 results were the only 800-Number Customer survey results used to calculate the final FY 2001 performance indicator, which combines results from all three surveys, the percentage of people rating service as E/VG/G would have been 80 percent, instead of the reported 81 percent. The variability in the FO Caller and Office Visitor surveys is similar to that of the 800-Number Customer.
Figure 1: 800-Number Customer Survey Results
Survey Type           Time Period   Reporting Year   Excellent   E/VG/G
800-Number Customer   Aug-98        FY 1999          33          83
800-Number Customer   Feb-99        FY 1999          33          84
800-Number Customer   Aug-99        FY 2000          27          80
800-Number Customer   Feb-00        FY 2000          27          81
800-Number Customer   Aug-00        FY 2001          25          81
800-Number Customer   Feb-01        FY 2001          25          79
As a result, we find that having only 8 or 16 weeks of measurement does not adequately guard against the risk of generating a biased result, one that is not reflective of customer satisfaction throughout the calendar year. In our opinion, conducting the survey once a year will further reduce the validity of projecting its results to an annual performance indicator.
WEAKNESS IN CONTROLS
During our audit of the customer service performance indicators, we found weaknesses in controls surrounding three areas.
Subjective Determination of Survey Responses
During our audit, we obtained a sample of 45 customer responses from the Office Visitor mail survey. We compared the written responses to OQA's data to validate that the data had been transcribed correctly. On one form, the respondent left the question for overall satisfaction blank, but a value of "excellent" was recorded in the database. OQA staff indicated that they determined the appropriate rating from the respondent's comments on the questionnaire. This approach carries a risk of biasing the results or creating the appearance of biased results. We would not characterize this approach as consistent with best practices in survey research.
Risk for Producing Inaccurate Survey Estimates
When OQA provided us with the FY 2001 Office Visitor survey data, it recognized that some valid data records had been excluded from the original calculation of the survey estimates because the records were not assigned survey weights. Adjusting the data to include these missing records and their correct weights did not result in a change to the performance indicators. However, this data exclusion illustrates a lack of controls in validating that survey weights are associated with all valid records before calculation of final survey estimates. While the final FY 2001 performance indicator estimates were not impacted, there could be discrepancies in the future.
Lack of Documentation of Methodology to Combine the Survey Results into a Single Annual Indicator
We reviewed OQA's methodology and process for combining the FY 2001 survey results into the final satisfaction performance indicators. To evaluate OQA's methodology, we analyzed the spreadsheet used to calculate the final results, reviewed other documents showing the derivation of inputs to the process, and discussed the methodology and process with OQA staff. While we determined that the methodology was reasonable and we were able to replicate the final results, the documentation was insufficient in two ways:
It was not generated in advance of the survey year.
It was not complete. The sources of information for the process were mostly oral discussions with OQA and inference based on the equations in the computer files provided.
Office of Management and Budget Circular Number A-123, Management Accountability and Control, Section II, page 6, states, in part, that, "The documentation for transactions, management controls, and other significant events must be clear and readily available for examination." This lack of documentation could affect OQA succession planning and leave the appearance of impropriety in future years. For example, the way in which rounding is performed could make the difference between achieving or failing to meet performance indicator goals. Because of this, it is important for SSA to have documentation for its process of combining multiple survey results into a single indicator.
CONCLUSIONS AND RECOMMENDATIONS
Our overall conclusion is that the key performance indicators are accurate. OQA derives the final results from a set of interaction tracking surveys that are well-conceived and developed. OQA administers the surveys using generally acceptable survey research methods. However, our audit identified five opportunities for improvement. Our recommendations are as follows:
1. Incorporate additional customer populations in surveys
We recommend OQA incorporate additional customer populations, i.e., Internet and mail, to improve the representativeness of the performance indicators. Excluding possible customer populations is inconsistent with SSA's strategic goal of delivering citizen-centered, world-class service. Due to differences in the statistical approach between the Internet surveys and the Interaction Tracking surveys, we recommend OQA initially report the Internet customer satisfaction results separately from the Interaction Tracking results. In the long run, we recommend OQA incorporate the Internet survey results with the other service delivery channel results, as the Internet is a growing service delivery channel.
OQA should develop annual estimates of the relative size of all non-trivial service delivery channels using the best available current information. If a service delivery channel cannot reasonably be included annually, OQA could review that channel less frequently and use the results from the latest survey in the final performance indicators. Alternatively, OQA could estimate the potential bias from the excluded channels and report this bias with the final performance indicators.
2. Redesign survey schedule across entire calendar year
We recommend OQA redesign its schedule to conduct the surveys to extend across the entire calendar year for each service delivery channel. OQA can maintain the same total sample size with fewer customers surveyed in each time period. For example, OQA selects six or seven FOs each week over the 16-week sample period for the biannual Office Visitor surveys. OQA could instead select two FOs per week over the entire year.
We recognize that distributing the same survey sample over the entire year increases OQA's time and expense to administer the surveys. We recommend OQA explore several different options to balance the additional effort required for a continuous survey. These can include:
Developing more automated procedures to execute the sample selection and survey data analysis. Our audit identified manual steps in the survey process that could be automated through time-saving measures such as "batch programs" or "macros."
Reducing the scope of the survey questions. OQA indicated that the interaction tracking surveys are primarily used for public information and not internal agency management information. OQA could eliminate the non value-added questions and reduce the size of the survey to only measure customer satisfaction.
Providing more timely information regarding service. OQA may decide to use these surveys as a meaningful management tool by providing service performance feedback to FOs, HOs, and Teleservice Centers. This might justify devoting more resources to the survey rather than less. OQA can develop a comprehensive and ongoing survey program that reports on its ability to meet performance goals and provides valuable information for promoting and delivering better quality service. If the performance indicators do not add this value, OQA can reconsider whether they are the correct indicators to use.
OQA can determine what combination of the above recommendations provides the most value to the organization.
3. Eliminate subjective determination of survey responses
We recommend that OQA not infer the value of missing responses on the Office Visitor surveys from other responses or comments. The gain realized in response rate is not sufficient to balance the risks associated with this subjective approach.
4. Incorporate internal controls to calculate final data
We recommend OQA add internal controls to validate the final performance indicator data to ensure that survey weights are not inadvertently omitted and have been associated with the appropriate data record. OQA should use this as an opportunity to review and update all their current quality review procedures for the final indicator calculations.
5. Improve methodology documentation for combining survey results into an annual result
We recommend OQA develop documentation that specifies its FY 2003 methodology for combining the survey results. This documentation should contain the following information:
Which survey results will be included in the final calculations.
Whether the final results will be weighted and, if so, how the weights will be calculated.
The algorithm for calculating the final indicators, including the exact equation and any relevant rounding conventions that will be used.
OQA should review this documentation prior to conducting the first survey in FY 2003 and beyond. Any necessary edits to the procedures should be made and explained within the documentation. This documentation can also serve as an audit trail of changes made to the methodology over time.
AGENCY COMMENTS
SSA partially agreed with Recommendation 1. SSA agreed that it should measure the satisfaction of customers who use SSA's Internet services. However, SSA disagreed that it should measure the satisfaction of customers who contact SSA via the mail. SSA believes that the mail is not a major service delivery channel and there would be challenges developing a sample frame. SSA disagreed with Recommendation 2 and believes that the benefits of extending the survey schedule across the entire year would be less than the additional effort and expense. Further, SSA stated that it recently decided to reduce the frequency of these surveys to once per year. SSA agreed with Recommendations 3 and 4. SSA disagreed with Recommendation 5 and believes its existing documentation is sufficient. The full text of SSA's comments can be found in Appendix E.
PWC RESPONSE
With respect to Recommendation 1, PwC continues to believe that a review of all service channel delivery methods, including the mail, would be beneficial to SSA. SSA could survey mail customers less frequently, depending on the size of the population and the difficulty of establishing a sampling frame. With respect to Recommendation 2, PwC continues to believe that SSA should explore options for conducting surveys over more of the year. Our suggestions were not intended to be inclusive of all options. We believe that there are other options that SSA could explore to control costs while conducting the survey over more of the year. For Recommendation 5, we reviewed the referenced materials, but continue to believe that SSA should document its methodology for combining the survey results prior to the start of the fiscal year. This would provide SSA with a complete audit trail and be in full compliance with Office of Management and Budget's documentation requirements. The documentation should reference the spreadsheets used, in addition to the methodology and common practices, i.e., rounding to decimal places. This would also ensure that common practices are consistent over time.
Appendices
APPENDIX A - Scope and Methodology
APPENDIX B - Acronyms
APPENDIX C - Flowcharts and Descriptions
APPENDIX D - Statistical Appendix
APPENDIX E - Agency Comments
Appendix A
Scope and Methodology
We conducted this audit to examine the Social Security Administration's (SSA) Fiscal Year (FY) 2002 customer satisfaction performance indicators. SSA developed these performance indicators to meet the requirements of the Government Performance and Results Act (GPRA) of 1993. Because FY 2002 survey data and results were not available at the time of this audit, we used the latest available data (FY 2001) in our audit of the controls and the final reported performance indicators. In addition, we evaluated differences in methodology between FY 2001 and FY 2002.
To test the accuracy and reliability of the customer satisfaction performance data, we:
Obtained FY 2001 data used to calculate the performance indicators, including data from the 800-Number Customer survey, the Field Office Caller survey, and the Office Visitor survey.
Recalculated the FY 2001 customer service satisfaction.
Evaluated the validity of Office of Quality Assurance and Performance Assessment's (OQA) FY 2001 methodology used to calculate the performance indicators.
Reviewed FY 2001 and FY 2002 procedures for sample selection.
Evaluated the FY 2001 processes to execute the survey.
Reviewed 45 surveys from FY 2001 to test internal controls for data entry.
Evaluated the differences between the FY 2001 and FY 2002 survey procedures.
Documented our understanding of the FY 2002 survey process.
In conducting this audit, we also:
Reviewed SSA's Performance and Accountability Report for FY 2001, SSA's Annual Performance Plan for FY 2001, and SSA's Revised Final Performance Plan for FY 2002 to determine the baseline data, definition, and data source for the performance indicator.
Reviewed GPRA and Office of Management and Budget guidance related to GPRA.
Reviewed internal PricewaterhouseCoopers documentation on previous survey reviews.
Interviewed Office of Strategic Management staff regarding the methodologies of the surveys.
Interviewed OQA staff to gain an understanding of the sampling process, the statistical methods and other procedures used to produce the performance data.
Our audit was limited to testing at SSA's Headquarters in Woodlawn, Maryland. The procedures we performed were in accordance with the American Institute of Certified Public Accountants' Statement on Standards for Consulting Services and the General Accounting Office's Government Auditing Standards for performance audits.
Appendix B
Acronyms
ANI Automatic Number Identifier
E/VG/G "Excellent"/ "Very Good"/ "Good"
FO Field Office
FY Fiscal Year
GPRA Government Performance and Results Act
HO Hearing Office
OASDI Old-Age, Survivors, and Disability Insurance
OQA Office of Quality Assurance and Performance Assessment
OSM Office of Strategic Management
OSSAS Office of Statistics and Special Area Studies
PwC PricewaterhouseCoopers LLP
SSA Social Security Administration
SSI Supplemental Security Income
Appendix C
Flowcharts and Descriptions
Customer Service Survey - 800-Number Customer:
The customer calls the Social Security Administration's (SSA's) 800-Number.
The Automatic Number Identifier (ANI) system records the customer data.
MCI furnishes SSA with the ANI data.
The Office of Statistics and Special Area Studies (OSSAS) selects the completed calls within the sampling period from the ANI data. A completed call is a call where the customer has selected to speak with an SSA representative or selected an option from the automated menu.
OSSAS selects the eligible calls from the completed calls. An eligible call is one that was made between 7 a.m. and 7 p.m. local time, came from a phone number that made fewer than 100 calls to SSA that day, and was made during the sample period. (A sketch of this eligibility screen follows this list.)
OSSAS selects a random sample of callers to participate in the survey from the list of eligible calls.
OSSAS sends an electronic file with the selected customers' information to the contractor.
The contractor administers the survey.
The contractor compiles survey responses and sends them electronically to OSSAS.
OSSAS applies survey weights to the sample data and calculates the final survey result.
OSSAS analyzes the final results.
OSSAS writes and publishes a report on customer satisfaction and the survey.
OSSAS distributes the report throughout SSA.
OSSAS analyzes the survey results for the Government Performance and Results Act (GPRA) performance indicator.
OSSAS combines the results from the surveys and weights them by the customer universe.
OSSAS reports customer satisfaction for the GPRA performance indicator to the Office of Strategic Management (OSM).
OSM publishes the GPRA results.
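The eligibility screen described in this list can be sketched as follows; the record layout, sample period, and sample size are assumptions for illustration, not OSSAS's actual programs.

    import random
    from collections import Counter
    from datetime import datetime, date

    # Hypothetical completed-call records: (caller phone number, local date/time of call).
    completed_calls = [
        ("301-555-0101", datetime(2001, 2, 5, 9, 30)),
        ("301-555-0101", datetime(2001, 2, 5, 18, 45)),
        ("410-555-0199", datetime(2001, 2, 6, 6, 50)),   # before 7 a.m., ineligible
        ("202-555-0123", datetime(2001, 2, 7, 13, 15)),
    ]

    # Assumed 4-week sample period.
    sample_period = (date(2001, 2, 1), date(2001, 2, 28))

    # Calls per phone number per day, used for the "fewer than 100 calls that day" rule.
    calls_per_number_day = Counter((num, dt.date()) for num, dt in completed_calls)

    def is_eligible(number, dt):
        in_period = sample_period[0] <= dt.date() <= sample_period[1]
        in_hours = 7 <= dt.hour < 19                      # 7 a.m. to 7 p.m. local time
        under_limit = calls_per_number_day[(number, dt.date())] < 100
        return in_period and in_hours and under_limit

    eligible_calls = [(num, dt) for num, dt in completed_calls if is_eligible(num, dt)]

    # Random sample of eligible callers to receive the survey (sample size is illustrative).
    survey_sample = random.sample(eligible_calls, min(2, len(eligible_calls)))
    print(f"{len(eligible_calls)} eligible calls, {len(survey_sample)} sampled")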
Customer Service Survey - FO Caller:
OSSAS selects the Field Offices (FOs) to participate in the survey.
The Office of Telecommunications and Systems Operations arranges for the installation of a Caller ID and other equipment in the selected FOs.
Customers call the FO.
The telephone contractor downloads and extracts customer information from the Caller ID system.
The contractor sends an electronic file of all the callers to OSSAS.
OSSAS extracts the data from the electronic file.
OSSAS selects the eligible FO callers.
OSSAS selects a sample of eligible FO callers to participate in the survey.
OSSAS sends an electronic file with the selected customers' information to the contractor.
The contractor administers the survey.
The contractor compiles survey responses and sends them electronically to OSSAS.
OSSAS applies survey weights to the sample data and calculates the final survey result.
OSSAS analyzes the final results.
OSSAS writes and publishes a report on customer satisfaction and the survey.
OSSAS distributes the report throughout SSA.
OSSAS analyzes the survey results for the GPRA performance indicator.
OSSAS combines the results from the surveys and weights them by the customer universe.
OSSAS reports customer satisfaction for the GPRA performance indicator to OSM.
OSM publishes the GPRA results.
Customer Service Survey - Office Visitor:
OSSAS selects a random sample of 52 FOs and 13 Hearing Offices (HOs) to participate in the survey.
OSSAS notifies the FOs and HOs of their selection to participate in the survey.
The customer visits the FO or HO.
The FO or HO enters the customer's information into the Access database or other tracking system when the customer checks in at the receptionist desk.
The FO or HO sends the electronic list of customers and their information to OSSAS.
OSSAS selects a random sample of customers to participate in the mailed survey.
OSSAS electronically sends the names and addresses of selected customers to the contractor.
The contractor administers the survey via mail.
The customer returns the completed survey to OSSAS.
OSSAS enters the survey responses into Blaise.
OSSAS reviews the information entered into Blaise for completeness.
OSSAS applies survey weights to the sample data and calculates the final survey result.
OSSAS analyzes the final results.
OSSAS writes and publishes a report on customer satisfaction and the survey.
OSSAS distributes the report throughout SSA.
OSSAS analyzes the survey results for the GPRA performance indicator.
OSSAS combines the results from the surveys and weights them by the customer universe.
OSSAS reports customer satisfaction for the GPRA performance indicator to OSM.
OSM publishes the GPRA results.
Appendix D
Statistical Appendix
1. Methodology for the selection of Field Offices (FO) and Hearing Offices (HO) to participate in the Social Security Administration's (SSA) customer satisfaction surveys
800-Number Customer Survey
Each year, the Office of Quality Assurance and Performance Assessment (OQA) selects customers who call SSA's 1-800-Number to participate in the 800-Number Customer Satisfaction Survey. Because FOs and HOs do not provide customer service to these customers via the 1-800-number, OQA does not select FOs or HOs to participate in this survey.
FO Caller Survey
Each year, OQA selects a sample of 110 offices to participate in its FO Monitoring Survey. OQA selects the sample without replacement from the current population of eligible FOs. Eligible FOs are those that have not been selected in previous years. OQA began this selection process in FY 2000.
The sample selection methodology first stratifies the sample frame by telephone type. The stratification is:
Telephone System   Number Selected
Executone          31
IVX                5
Fujitsu            74
Within each telephone system type, sample selection is proportional to the number of FOs within each region and for each area within a region.
From this initial sample of 110 FOs, OQA selects a sub-sample of offices to participate in the FO Caller Survey. The sub-sample is a systematic sample from the parent sample, after sorting the parent sample by telephone system, region, and area. Thus, the FO Caller survey sample has a distribution of telephone system type, region, and area similar to the parent FO Monitoring Survey sample.
In FY 2001, the FO Caller Survey sub-sample consisted of 75 of the 110 offices from the FO Monitoring Survey. Although phone system limitations make it impossible to sample every office, OQA attempts to survey as many of the 75 offices as possible. For FY 2001, 49 of the 75 offices were included in the November 2000 survey and 41 of the 75 offices were included in the May 2001 survey.
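A minimal sketch of the systematic sub-sampling described above follows; the office records are hypothetical, and the selection routine is an assumption for illustration rather than OQA's actual procedure.

    import random

    # Hypothetical parent sample of 110 FOs, each tagged with the sorting characteristics.
    parent_sample = [
        {"office": f"FO-{i:03d}",
         "phone_system": random.choice(["Executone", "IVX", "Fujitsu"]),
         "region": random.choice(["Boston", "New York", "Atlanta", "Chicago", "Dallas"]),
         "area": random.randint(1, 4)}
        for i in range(110)
    ]

    # Sort by telephone system, region, and area so the systematic draw spreads
    # the sub-sample across those characteristics, as in the parent sample.
    parent_sample.sort(key=lambda fo: (fo["phone_system"], fo["region"], fo["area"]))

    def systematic_sample(frame, n):
        """Select every k-th unit from a random start, where k = len(frame) / n."""
        k = len(frame) / n
        start = random.uniform(0, k)
        return [frame[int(start + i * k)] for i in range(n)]

    fo_caller_sample = systematic_sample(parent_sample, 75)   # 75 of 110 offices, as in FY 2001
    print(f"{len(fo_caller_sample)} offices selected for the FO Caller Survey")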
Office Visitor Survey
The Office Visitor Survey is conducted at 52 FOs and 13 HOs twice each year. For each survey execution, the offices are selected without replacement from the current list of eligible FOs or HOs. Eligible offices are those that have not been selected in previous years. This process began in FY 2000.
While HOs are selected as a simple random sample, OQA selects FOs by region. The number of FOs from each region is proportional to the number of FOs in that region. The distribution of FOs sampled by region is:
Region Name FOs Sampled
Boston 3
New York 5
Philadelphia 6
Atlanta 10
Chicago 9
Dallas 6
Kansas City 3
Denver 2
San Francisco 6
Seattle 2
Additionally, two extra FOs are selected for each region as backups if an FO cannot be surveyed. According to OQA, the extra offices are rarely needed. If not used, the backup FOs remain eligible for selection in future time periods.
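The proportional allocation can be sketched as follows; the regional FO counts are hypothetical placeholders, since the actual counts (which produce the allocation in the table above) are not given in this report.

    # Hypothetical number of FOs in each region (illustrative values only).
    fos_by_region = {
        "Boston": 70, "New York": 125, "Philadelphia": 150, "Atlanta": 260,
        "Chicago": 230, "Dallas": 160, "Kansas City": 80, "Denver": 45,
        "San Francisco": 150, "Seattle": 50,
    }

    sample_size = 52
    total_fos = sum(fos_by_region.values())

    # Proportional allocation with largest-remainder rounding so the regional
    # allocations sum exactly to the 52 FOs sampled.
    raw = {region: sample_size * count / total_fos for region, count in fos_by_region.items()}
    allocation = {region: int(value) for region, value in raw.items()}
    shortfall = sample_size - sum(allocation.values())
    for region in sorted(raw, key=lambda r: raw[r] - allocation[r], reverse=True)[:shortfall]:
        allocation[region] += 1

    print(allocation)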
2. Weighting of survey estimates
SSA combines the results of the 800-Number Customer, FO Caller and Office Visitor Surveys to generate a single customer satisfaction rating. This final estimate is produced by proportionally weighting each of the component surveys by the appropriate customer universe it represents.
The final weighting used for the Fiscal Year (FY) 2001 results was as follows:
Survey                                FY 2001 Universe
August 2000 800-Number Customer       38,000,000
February 2001 800-Number Customer     38,000,000
November 2000 FO Caller               42,500,000
May 2001 FO Caller                    42,500,000
July 2000 Office Visitor - FO         10,500,000
January 2001 Office Visitor - FO      12,000,000
July 2000 Office Visitor - HO         114,000
January 2001 Office Visitor - HO      153,000
Note that OQA weights each of the 800-Number Customer and FO Caller surveys equally at one half the estimated customer universes of 76,000,000 and 85,000,000, respectively.
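A minimal sketch of this weighted combination follows, using the universe sizes above and hypothetical survey-level satisfaction rates (the actual FY 2001 rates come from OQA's survey files, not this report).

    # (survey, FY 2001 customer universe, hypothetical percent rating service E/VG/G)
    surveys = [
        ("August 2000 800-Number Customer",   38_000_000, 81.0),
        ("February 2001 800-Number Customer", 38_000_000, 79.0),
        ("November 2000 FO Caller",           42_500_000, 82.0),
        ("May 2001 FO Caller",                42_500_000, 83.0),
        ("July 2000 Office Visitor - FO",     10_500_000, 84.0),
        ("January 2001 Office Visitor - FO",  12_000_000, 82.0),
        ("July 2000 Office Visitor - HO",        114_000, 78.0),
        ("January 2001 Office Visitor - HO",     153_000, 77.0),
    ]

    total_universe = sum(universe for _, universe, _ in surveys)

    # Each survey contributes in proportion to the customer universe it represents.
    overall = sum(universe * pct for _, universe, pct in surveys) / total_universe
    print(f"Weighted E/VG/G satisfaction: {overall:.1f}%")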
3. Impact of not performing a survey continuously
Taking a survey over a limited time period and then projecting the results to a larger time period introduces a risk that the reported results may be biased. The risk is realized when the results from a time period that is not sampled are systematically different compared to the result from a time period that is sampled. We developed a simple approximation to demonstrate the possible magnitude of that risk for the customer satisfaction surveys.
800-Number Customer and FO Caller Surveys
In the report, we state that for each percentage point difference in customer satisfaction during non-sampled time periods, OQA results are overstated or understated by 0.85 percentage points.
This is supported by the following analysis:
Assume the true level of customer satisfaction for the period surveyed is x%. This level is applicable for the 8 weeks of the year for which the two surveys are conducted.
Assume the true level of customer satisfaction for the period not surveyed is one percentage point greater, or (x+1)%. This level is applicable for the 44 weeks of the year during which no survey is conducted (52 weeks - 8 weeks).
The true level of customer satisfaction for the entire year should be:
True customer satisfaction = [8(x)% + 44(x+1)%] / 52
The bias of incorrectly using x% to estimate the customer satisfaction is found by calculating the difference between the rate from the sampled time period and the true customer satisfaction rate, or:
Bias = x% - True customer satisfaction = x% - [8(x)% + 44(x+1)%] / 52 = -44/52 percentage points
The bias equals approximately -0.85 percentage points for a +1 percentage point difference.
As a result, if the non-sampled period satisfaction rate is 1 percentage point greater than the sampled period, customer satisfaction is understated by 0.85 percent. If the satisfaction rate in the non-sampled periods is 1 percentage point lower, the customer satisfaction is overstated by 0.85 percent.
Office Visitor Survey
In the report, we state that for each percentage point difference in customer satisfaction during non-sampled time periods, OQA results are overstated or understated by 0.69 percentage points.
This is supported by the following analysis:
Assume the true level of customer satisfaction for the period surveyed is x%. This level is applicable for the 16 weeks of the year for which the two surveys are conducted.
Assume the true level of customer satisfaction for the period not surveyed is one percentage point greater, or (x+1)%. This level is applicable for the 36 weeks of the year during which no survey is conducted (52 weeks - 16 weeks).
The true level of customer satisfaction for the entire year should be:
True customer satisfaction = [16(x)% + 36(x+1)%] / 52
" The bias of incorrectly using x% to estimate the customer satisfaction
is found by calculating the difference between the rate from the sampled time
period and the true customer satisfaction rate or:
Bias = x% - True customer satisfaction
The bias equals -0.69 percent for a +1 percent difference.
As a result, if the non-sampled period satisfaction rate is 1 percentage point greater than the sampled period, customer satisfaction is understated by 0.69 percent. If the satisfaction rate in the non-sampled periods is 1 percentage point lower, the customer satisfaction is overstated by 0.69 percent.
The following table summarizes the potential impact of the bias for a range of differences between the sampled period and the non-sampled period.
Effect of Possible Non-Sampling Bias on Overall Survey Results
If percentage from non-sampled period differs from sampled period by   Bias to overall indicator (800-Number Customer and FO Caller Surveys)   Bias to overall indicator (Office Visitor Survey)
-1.0% 0.85% 0.69%
-0.5% 0.42% 0.35%
0.0% 0.00% 0.00%
0.5% -0.42% -0.35%
1.0% -0.85% -0.69%
To precisely calculate the potential bias, we would require knowledge of the exact customer universe sizes. The observations we have made represent approximate impacts because they use weeks as a proxy for the true customer universe sizes. For instance, the number of 800-Number callers during the 8 weeks of survey activity is probably close to, but not exactly, 8/52 of the total 800-Number calls for the entire year.
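The table values follow directly from the approximation described above; the short sketch below recomputes them, using weeks as a proxy for the customer universe sizes as noted.

    def bias(sampled_weeks, diff, total_weeks=52):
        """Bias (in percentage points) of projecting the sampled-period rate to the
        whole year when the non-sampled period differs by `diff` percentage points."""
        non_sampled_weeks = total_weeks - sampled_weeks
        true_rate_offset = non_sampled_weeks * diff / total_weeks   # true annual rate minus x%
        return -true_rate_offset                                    # x% minus true annual rate

    # Reproduce the table: 8 survey weeks for 800-Number/FO Caller, 16 for Office Visitor.
    for diff in (-1.0, -0.5, 0.0, 0.5, 1.0):
        print(f"{diff:+.1f}%   800-Number/FO Caller: {bias(8, diff):+.2f}%   "
              f"Office Visitor: {bias(16, diff):+.2f}%")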
Appendix E
Agency Comments
MEMORANDUM
Date: January 22, 2003
To: James G. Huse, Jr.
Inspector General
From: Larry W. Dye
Chief of Staff
Subject: Office of the Inspector General (OIG) Draft Report, "Performance Indicator Audit: Customer Satisfaction" (A-02-02-11082)-INFORMATION
We appreciate OIG's efforts in conducting this review. Our comments on the draft report content and recommendations are attached.
Staff questions may be referred to Laura Bell on extension 52636.
SSA Response
COMMENTS ON THE OFFICE OF THE INSPECTOR GENERAL (OIG) DRAFT REPORT "PERFORMANCE INDICATOR AUDIT: CUSTOMER SATISFACTION" (A-02-02-11082)
We appreciate the opportunity to review and comment on the draft report. We are pleased with your conclusion that our performance indicators for fiscal year (FY) 2001 were accurately and reasonably calculated, and that the surveys from which they were derived were well conceived and developed.
With respect to the areas identified as presenting opportunities for improvement, we do not believe that the degree of precision recommended by PricewaterhouseCoopers (PwC) is warranted or justifiable from a resource perspective. This is especially true given that the indicators are based on opinion research, which is by its nature relatively soft, and not comparable to financial or accuracy data.
Finally, we would like to note that the Performance Indicator: Customer Satisfaction was based on surveys done under the Agency's Market Measurement Program, which has been revamped and is now called the Service Delivery Feedback Program.
Our responses to the specific recommendations are provided below:
Recommendation 1
The Social Security Administration (SSA) should incorporate additional customer population surveys into their customer satisfaction performance indicator.
SSA Response
We agree that it is important to understand the needs and satisfaction levels of all who do business with SSA, including those who write, call, visit our offices, and use our Internet site, in order to achieve our goal of providing world-class, citizen-centered service.
We would point out that the performance indicator aims to reflect satisfaction with SSA's primary modes of service delivery, which are telephone and in-person service. Mail is not a major service delivery channel through which the public initiates contact with SSA. Moreover, we would have some challenges in developing a sample frame, because we presently have no consolidated repository for recording mail contacts with SSA's over 1,300 field and headquarters components. We believe we can obtain an adequate indication of customer satisfaction without targeting those who send us mail.
With respect to the Internet, we acknowledge that this is a significant and growing area of service delivery in government, and it is important to capture feedback from those who choose to do business with us this way. As part of our overall program for obtaining public opinion on service delivery, we regularly survey visitors to ssa.gov, both through an online questionnaire, and as part of other telephone or mail-based surveys. This activity gives us valuable information on Internet user experience and satisfaction, which we can use to make improvements and support our various eGov initiatives.
The sampling methodology for carrying out our Internet surveys differs in some essential respects from that of our other satisfaction surveys. We would need to work through certain challenges to incorporate and combine the results from both surveys, in order to preserve data integrity and comparability. However, we believe it is an important area to explore, and agree to look into the possibilities of developing a statistically sound methodology for combining the results from both surveys into a single measure.
In the meanwhile, for the short term, we will also look into reporting separately on Internet satisfaction results, based on our current survey activity in this area.
Recommendation 2
SSA should redesign the survey schedule across the entire calendar year.
SSA Response
We disagree. While PwC recognized that distributing the survey samples over the entire year will increase our time and expense to administer the surveys, we do not believe that the options they offer would significantly offset the additional effort and expense that would be required. In addition, we have recently decided to reduce the frequency of these surveys to once per year, reflecting their main function as a gauge of public opinion rather than a tool for managing service delivery, which is better addressed by local data gathering.
Recommendation 3
SSA should eliminate subjective determinations of survey responses.
SSA Response
We agree that subjective determinations should not occur when evaluating customer satisfaction survey responses. Our current coding policy directs the keyers to record a value for a missing response only when the respondent's comments use the exact wording of the rating scale in describing SSA's service. We believe this is an acceptable survey research practice because, in these rare instances, no subjective evaluation is required of the keyer; the respondent's comments must mirror the exact wording of the rating scale.
Recommendation 4
SSA should incorporate internal controls to calculate final data.
SSA Response
We agree and have already taken steps to incorporate better controls throughout the process.
Recommendation 5
SSA should improve methodology documentation for combining survey results into an annual result.
SSA Response
We disagree. The documentation of the methodology for combining survey results to produce the performance indicator has been included in the annual memorandum releasing results to the Agency starting with FY 2000. In addition, we maintain spreadsheets with the pertinent formulas in our electronic files.
Overview of the Office of the Inspector General
Office of Audit
The Office of Audit (OA) conducts comprehensive financial and performance audits of the Social Security Administration's (SSA) programs and makes recommendations to ensure that program objectives are achieved effectively and efficiently. Financial audits, required by the Chief Financial Officers' Act of 1990, assess whether SSA's financial statements fairly present the Agency's financial position, results of operations and cash flow. Performance audits review the economy, efficiency and effectiveness of SSA's programs. OA also conducts short-term management and program evaluations focused on issues of concern to SSA, Congress and the general public. Evaluations often focus on identifying and recommending ways to prevent and minimize program fraud and inefficiency, rather than detecting problems after they occur.
Office of Executive Operations
The Office of Executive Operations (OEO) supports the Office of the Inspector General (OIG) by providing information resource management; systems security; and the coordination of budget, procurement, telecommunications, facilities and equipment, and human resources. In addition, this office is the focal point for the OIG's strategic planning function and the development and implementation of performance measures required by the Government Performance and Results Act. OEO is also responsible for performing internal reviews to ensure that OIG offices nationwide hold themselves to the same rigorous standards that we expect from SSA, as well as conducting investigations of OIG employees, when necessary. Finally, OEO administers OIG's public affairs, media, and interagency activities, coordinates responses to Congressional requests for information, and also communicates OIG's planned and current activities and their results to the Commissioner and Congress.
Office of Investigations
The Office of Investigations (OI) conducts and coordinates investigative activity related to fraud, waste, abuse, and mismanagement of SSA programs and operations. This includes wrongdoing by applicants, beneficiaries, contractors, physicians, interpreters, representative payees, third parties, and by SSA employees in the performance of their duties. OI also conducts joint investigations with other Federal, State, and local law enforcement agencies.
Counsel to the Inspector General
The Counsel to the Inspector General provides legal advice and counsel to the Inspector General on various matters, including: 1) statutes, regulations, legislation, and policy directives governing the administration of SSA's programs; 2) investigative procedures and techniques; and 3) legal implications and conclusions to be drawn from audit and investigative material produced by the OIG. The Counsel's office also administers the civil monetary penalty program.