Part 4: How Competencies Are Evaluated
(Part of this article is based on an article I previously published: “Competency-based holistic evaluation of prehealth applicants,” The Advisor [NAAHP publication] 29(2): 30-36, 2009.)
If you have ever applied for a federal government job, USAJobs.gov will often ask you to self-assess your competency development as follows:
A – Lacks education, training or experience in performing this task
B – Has education/training in performing task, not yet performed on job
C – Performed this task on the job while monitored by supervisor or manager
D – Independently performed this task with minimal supervision or oversight
E – Supervised performance/trained others/consulted as expert for this task
While there is no universal standard for evaluating everyone’s competencies, it is clear that, with the movement to identify competencies for interprofessional health care teams, a rating scale like this will be the ruler against which all trainees are measured. This has already extended to the evaluation of applicants to health professional programs.
The AAMC/HHMI Scientific Foundations for Future Physicians report notes four key steps to properly implementing competencies in future curricular reforms: identification of the competencies, determination of each competency’s components and expected performance levels, assessment of the competency, and overall assessment of the process (p. 36).
In previous articles (Part 1, Part 2, and Part 3), I identified competency domains that apply to all individuals seeking health professional training and described how those domains are viewed by pre-health advisees preparing an application compared with how their evaluators view them. In this article, I describe how our evaluation system attempts to follow the third step, assessment of the competency. I will detail the data we collect in our pre-application process to evaluate each advisee’s performance level and how that evidence influences our institutional evaluation of each applicant.
Articulating Competencies
“Competency is the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community being served.” (JAMA 287:226-235, 2002; cited by AAMC/HHMI Scientific Foundations for Future Physicians, 2009).
In a process where thousands of applications are filed and reviewed, individuals who develop the ability to communicate and articulate their own competencies in the application process will clearly have an advantage. In the evaluation system developed at George Mason University, each pre-applicant must rate his or her own competency development, and these self-ratings are compared against the ratings given by solicited references and independent interviewers. In addition to the interview feedback, the committee also evaluates several essays (of unrestricted length) that probe the reflections and experiences each applicant uses to justify those competency self-assessments. Usually an essay focuses mainly on one or two competency domains, but often a respondent’s essay touches multiple domains at once. Furthermore, the content of the solicited letters of recommendation and the interview feedback is reviewed for specific evidence that justifies each referee’s assessment of the applicant.
With this in mind, consider two fictional letters of recommendation from professors supporting an applicant, which show how our rubric standards are applied. Professor Smith recommends the applicant, citing the following evidence: “achieved the third highest grade in the class,” “asks interesting questions,” “often helps other students in lab,” and “writes well-organized reports.” In contrast, Professor Jones comments, “even though the student would earn an A+ in my class, I recall her questions in class revealed a sincere interest in the topic, especially when she asked how what I was teaching was related to a breaking news story about an invention…”. In the holistic evaluation, Professor Smith’s letter gives me a sense of an academic foundation that is more likely “proficient” (the individual has completed sufficient training to reliably reproduce a core set of knowledge and skills, but must receive further training when confronted with situations where the training is applied; see The Competency Manifesto: Part 3 for details), while Professor Jones’s letter suggests “confident” (the individual is competent in an above-average set of knowledge and skills and demonstrates appropriate confidence in adapting to new situations that test that skill set; see The Competency Manifesto: Part 3 for details).
Measurement and comparison of competencies are made by reviewing the content of all the written materials submitted to our committee, including those based on oral communication. The results of the 360-degree assessment of the applicant’s competencies are reviewed but are not automatically factored into the final recommendation. While our competency assessment tends to be based on a threshold standard, the assigned recommendations are also normative, comparing each applicant’s competencies against what is considered acceptable for matriculants. There are some limitations with determining “acceptable” as what our committee may see as a highly rated candidate at one school may be average at another; this is why the competency domain of institutional fit cannot be addressed or factored in with this process.
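To make the comparison step concrete, here is a minimal sketch, in Python, of how ratings from the three sources of a 360-degree assessment (self, solicited reference, interviewer) could be laid out side by side and checked against a threshold. This is not the committee’s actual instrument: the level names beyond “proficient” and “confident,” the function name, and the gap-flagging rule are all illustrative assumptions.

```python
# Hypothetical illustration only; not the committee's actual instrument.
# "proficient" and "confident" come from the rubric discussed in Part 3;
# the remaining level names, the threshold, and the flagging rule are assumptions.

LEVELS = ["novice", "proficient", "confident", "expert"]  # assumed ordering
THRESHOLD = "proficient"  # assumed minimum acceptable level for a domain


def compare_ratings(self_rating: str, reference: str, interviewer: str) -> dict:
    """Collect the three ratings for one competency domain and flag
    (a) external ratings that fall below the threshold and
    (b) a self-rating more than one level above the highest external rating."""
    ranks = {
        "self": LEVELS.index(self_rating),
        "reference": LEVELS.index(reference),
        "interviewer": LEVELS.index(interviewer),
    }
    external_max = max(ranks["reference"], ranks["interviewer"])
    return {
        "ratings": {"self": self_rating, "reference": reference, "interviewer": interviewer},
        "below_threshold": external_max < LEVELS.index(THRESHOLD),
        "possible_overestimate": ranks["self"] - external_max > 1,
    }


# Example: a "confident" self-rating against two "proficient" external ratings.
print(compare_ratings("confident", "proficient", "proficient"))
```

The point of the sketch is only that the three perspectives are viewed side by side rather than averaged into a single score, which mirrors the statement above that the 360-degree results are reviewed but not automatically factored into the final recommendation.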
Differences in Acceptable Competency Levels in Admissions
Not all health professions programs require institutional evaluations even when a process like mine is available to an applicant, so the best that can be done is to showcase differences among applicants relative to the central application services for medicine and dentistry.
The most annoying question in pre-health advising is “how many students get in?” With competency-based admissions, the question is better focused on “how competent must I be to get in?” For the last admissions cycle (entering 2010), the data are sorted by committee recommendation level against how far the applicant progressed through the admissions cycle (received an interview, waitlisted, or accepted). Overall, one or more applicants who received a rating of “recommend” or above were accepted into at least one of the health professional programs to which they applied.
As one would expect, the percentage of individuals accepted increases as the recommendation rating becomes more favorable. However, if one separates applicants into those applying to allopathic medical schools (AMCAS) and those applying to dental schools (AADSAS), interesting differences emerge. First, for the dental school applicants, there seemed to be a remarkable difference in the acceptance percentage based on the recommendation rating (100% enthusiastic, 43% strong) even though their average application GPA’s (3.61 vs. 3.57) and DAT scores were very similar. This suggests that the competency-based recommendation rating correlates well with admissions decisions regarding characteristics sought by the schools among applicants.
In contrast, the competitiveness of allopathic medical school admissions seems to attract individuals with greater development of competencies. Whereas dental schools accepted 100% of our applicants with “enthusiastic” ratings, medical schools accepted only 27% of applicants from our institution with “enthusiastic” ratings. Indeed, the select few allopathic applicants with “highly recommended” ratings were accepted at a higher proportion than the other groups (75%), despite the fact that the average GPA of the “highly recommended” group was lower than that of the “enthusiastic” group (3.58 vs. 3.73). Traditionally, our advisees have not fared well on the MCAT, so the comparable acceptance percentages among our “enthusiastic,” “strong,” and “confident” groups might be related to the MCAT scores each group compiled (26.8 “highly,” 25.1 “enthusiastic,” 26.5 “strong,” 24.9 “confident”). More interesting were the comparable statistics for applicants who were accepted (GPA 3.57, MCAT 27.4) and those who were waitlisted (3.64, 27.7). In general, the admissions process for allopathic medicine seems to value both a threshold expectation of performance on the MCAT and competencies that are clearly superior and maturely developed.
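For readers who want to see the arithmetic behind the percentages quoted above (and summarized in Tables 1-3), the short sketch below shows the straightforward calculation of an acceptance rate per recommendation rating. The counts are made-up placeholders for illustration only; they are not the actual EY2010 committee data.

```python
# Illustration of the acceptance-rate arithmetic only; the counts below are
# made-up placeholders, not the actual EY2010 committee data.

from collections import namedtuple

Outcome = namedtuple("Outcome", ["applied", "accepted"])

# Hypothetical counts keyed by committee recommendation rating.
by_rating = {
    "highly recommended": Outcome(applied=4, accepted=3),
    "enthusiastic": Outcome(applied=11, accepted=3),
    "strong": Outcome(applied=14, accepted=6),
    "confident": Outcome(applied=9, accepted=2),
}

for rating, o in by_rating.items():
    rate = 100 * o.accepted / o.applied
    print(f"{rating:>18}: {o.accepted}/{o.applied} accepted ({rate:.0f}%)")
```

The same accepted-over-applied ratio, computed separately for each application service, is what drives the AMCAS versus AADSAS comparison above.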
Signs of Future Admissions Trends
It must be mentioned that one cannot generically characterize all medical school admissions processes the way this analysis might tempt one to do. Each individual school assesses these competencies and its own institutional fit separately, so having a highly desirable pre-health institutional evaluation rating is no guarantee of admission to the health professional school of one’s dreams. In addition, the evaluation ratings are determined months before an applicant is typically interviewed at a school, so a competency may continue to develop throughout the months of the admissions process.
What it does suggest is that accurate self-assessment, development, communication, and demonstration of one’s pre-professional competencies may be a better indicator of one’s chances for admission in a more holistic admissions process. Furthermore, the threshold for competency development may differ for every major type of health professional program one considers for a career, so an honest review of the competencies expected of the individual professions in a health care team would be valuable in helping thousands of pre-health students (and professional students) find satisfying future careers and specialties in the health care workforce. More importantly, a competency-based admissions process places additional weight on life experience, maturity, and personal growth; instead of trying to fit preparation for a professional career into a four-year box, it should become more personally acceptable to realize that taking more time to explore one’s passions and interests may strengthen the weaker competencies one has at 21 years of age.
Table 1. EY2010 admissions final decisions and average GPA by committee recommendation.
Table 2. EY2010 AADSAS admissions final decisions
Table 3. EY2010 AMCAS admissions final decisions
Emil Chuck, Ph.D., is Director of Advising Services for the Health Professional Student Association. He brings over 15 years of experience as a health professions advisor and an admissions professional for medical, dental, and other health professions programs. In this role for HPSA, he looks forward to continuing to help the next generation of diverse healthcare providers gain confidence in themselves and become successful members of the interprofessional healthcare community.
Previously, he served as Director of Admissions and Recruitment at Rosalind Franklin University of Medicine and Science, Director of Admissions at the School of Dental Medicine at Case Western Reserve University, and as a Pre-Health Professions Advisor at George Mason University.
Dr. Chuck is an expert on admissions, has been quoted by the Association of American Medical Colleges (AAMC), and has volunteered as a workshop facilitator on holistic admissions for the American Dental Education Association (ADEA). He has also contributed to the essay collection The Perfect Doctor by Pager Publications and has developed competency-based rubrics supporting holistic review.
Comments
I am so thankful that my school did not have a pre-health committee. Look at the number of data points you have. With that small of a data set you might as well burn their collective applications and interpret the smoke signals. Leave your poor students alone! They already have to go through the AMCAS app process.
Dear Emil Chuck,
To me what you’ve concluded is that each school prefers their own unique student and that there is no type of student that is most “competent”.
Your data is terribly bleak. I find more information on predents.com and the opposing medical version than I do with what you have just given me. I’m not blaming you though; it’s just that George Mason, as I remember it, was one of my safety schools in high school, and I did not expect to see as many competitive applicants as I do with UVA, W&M, or VT. So there’s the problem, not you though.
Now if you could please explain this to me. Why exactly should I, if my school had a pre-health committee, be evaluated by a couple of strangers with no relatable medical or dental school education? What qualifications do they have to judge whether I will be suitable for a certain profession? Why make it harder than it already is to apply to medical and dental school by having your committee impose what seem to be roadblocks/unnecessary hoops that we must jump over to reach our goals? I’m sorry, but please enlighten me to see what positive impact your committee has, because I seriously believe that the medical and dental school admissions committees will place little emphasis on your evaluations compared to our G.P.A., MCAT/DAT scores, personal essay, recommendations, shadowing hours, extracurriculars, and interview. (Is there really a big difference between an applicant who has been reviewed by your staff and one who hasn’t?)
“First, for the dental school applicants, there seemed to be a remarkable difference in the acceptance percentage based on the recommendation rating (100% enthusiastic, 43% strong) even though their average application GPA’s (3.61 vs. 3.57) and DAT scores were very similar. This suggests that the competency-based recommendation rating correlates well with admissions decisions regarding characteristics sought by the schools among applicants.”
Why did you choose to leave out the DAT scores when you’ve already provided GPA and recommendation evaluation? Maybe the DAT scores were way off and that’s why they got rejected? Perhaps the 3.61 applicant also took more upper level science courses while the 3.57 took more mediocre classes?
“There are some limitations with determining “acceptable” as what our committee may see as a highly rated candidate at one school may be average at another; this is why the competency domain of institutional fit cannot be addressed or factored in with this process.”
I just feel like you’re overcomplicating things. I’m a student at Mason, and believe me, a lot of pre-meds believe that it’s a chore to go through Pre-Health because of all the rules and regulations! They hurt us, not help us. Most people get discouraged and do not apply because of this.
And of course, endless statistics. While it’s great to publish all these numbers, I think GMU Pre-Health needs to focus on helping their students, and that should be their main and sole objective.
And I’d have to second what everyone else said: it’s sort of self-explanatory that great students have a higher chance of acceptance than poorer students. It’s easy to discern a great student from a poor one.
Please come back to George Mason University! We need you this coming year!!!!!!!!!!!!!!!!!!!!!!!!!!
Dr. Chuck’s leaving Mason?
Without Emil Chuck, I don’t think I’d be a medical student. Someone who will speak the truth about the application process is needed at every undergrad, not just Mason, but Harvard and Tech, etc. It’s too easy to underestimate/overestimate in the weird world of AMCAS. Thanks Emil!
Very informative article. Thank you!