Student Affairs Planning, Assessment & Research
Texas A&M University
Division of Student Affairs

Assessment

The Role of the Institutional Review Board*

September 2, 2019 by Darby

Every now and then, Student Life Studies’ staff gets asked, “Does this project need to go to the Institutional Review Board (IRB)?” Every now and then, Student Life Studies’ staff says, “This project definitely needs to go to the IRB.” So, what exactly is the IRB, and why would projects need its approval? Keep in mind that every campus has different processes and procedures, so Texas A&M’s may differ from other institutions’.

I’ll hit some of the basics here, but to really understand the role of the IRB at Texas A&M, see https://vpr.tamu.edu/human-research-protection-program/. That website describes the definitions, steps for approval, resources, and training requirements. The IRB exists to protect humans (living subjects) in the research process (a systematic investigation resulting in generalizable knowledge). In research, the investigator gathers identifiable data about people through some sort of intervention or interaction. But not every data collection activity requires IRB approval. In many cases in the Division of Student Affairs, assessment is completed to improve a specific program or to improve student learning in a particular activity. Those data collection functions, although they inherently involve interacting with humans, do not have the purpose of creating new, generalizable knowledge. Alternatively, there are topics and populations for which Student Life Studies would recommend IRB review. Those typically include sensitive topics or groups (alcohol/drugs, illegal activity, sexual activity, sexual misconduct, minors, cognitively impaired adults, pregnant women, prisoners, etc.) or cases when you know you want to publish the results. I hope you can see where there might be overlap, or questions, between program improvement data collection and human subjects research.

The IRB offers/requires training for any investigator. If you are just collecting some feedback about your program to make changes for next year, you might think you don’t need to take the IRB training. But the training provides good information about the ethics of data collection, regardless of whether you are conducting actual research. Texas A&M uses the Collaborative Institutional Training Initiative (CITI) as the online training system. The nice thing is that the training is good for five years before you have to take a refresher course. The course can be accessed at https://rcb.tamu.edu/humans/training/human-subject-protection-training-in-citi. Although not the most exciting of professional development options, it is still important for student affairs staff who collect any data.

If you need to submit a proposal to the IRB, there are several steps to complete and questions that need to be answered. The page at https://vpr.tamu.edu/human-research-protection-program/approval-process/before-you-submit/ provides an overview of the documents you will need. To understand the questions on the application, see the Socio-Behavioral Protocol Template at https://rcb.tamu.edu/humans/toolkit/templates/templates. When you are ready to submit, you log into the iRIS portal at https://iris.tamu.edu/shibboleth-ds/index.html. The instructions at https://vpr.tamu.edu/human-research-protection-program/approval-process/how-to-submit/ walk you through the process. The questions can be confusing because of some of the jargon, so feel free to reach out to the IRB hotline at 845-4969, or the general office number, 458-4067, to find the Division of Student Affairs liaison.

I hope this information gave you an introduction to the IRB with resources you can access for more detail. Feel free to reach out to Student Life Studies for assistance as well. We are always here to help.

*The overview provided here is very simplified. Please refer to the IRB or Student Life Studies for more specific information.

Filed Under: Assessment

Reflections on the Student Success in Higher Education Conference

July 2, 2019 by Darby

Last month, I attended the new NASPA Student Success in Higher Education Conference. It combined four conferences in one: Closing the Achievement Gap, Student Financial Wellness, First-Generation Student Success, and Assessment, Persistence and Data Analytics. Because all of those topics intertwine, the conference had something for everyone.

From the assessment standpoint, I continue to be surprised, although I should not be, at the number of people who have had assessment added to their job without any skills or knowledge in the area. Over 50 people attended the “Assessment 101” pre-conference session to build themselves a foundation. In a session on assessment culture, it was clear that some professionals are at the beginning stages of implementing assessment in their department or division. At the same time, the conversation about student affairs assessment has matured at the conference, so there are sessions that are more advanced, focusing on the use of big data, different data collection methodologies, and using data to predict at-risk students.

Several sessions addressed student employment as both a financial and a developmental experience, which I think is going to continue to be a hot topic as more students work during college. Some students, even those who work, struggle with food and housing insecurity. They are a challenging population to assess because they may not be readily identifiable.

Students are complex. Higher education is complex. Student affairs is complex. Data is complex. This conference tried to address some of these issues and how they intersect, so that we can help students be successful in their college life and beyond.

NASPA has not announced where the 2020 conference will be, but I encourage you to consider it as part of your professional development options.

Filed Under: Assessment

What I Learn from Program Reviews

May 1, 2019 by Darby

Recently, I was asked to review a relatively new student affairs assessment office at a different institution using the CAS Standards (https://www.cas.edu/). I have done program reviews before, and they always remind me to think about our own processes and the future. This one was no different.

Student Life Studies has been around for more than 20 years. As one of the first “departments” (more than one person) of student affairs assessment, we learned as we went. There wasn’t a manual or an elder giving sage advice. There was no association of student affairs assessment professionals. There were no conferences devoted to the topic. We made mistakes along the way and continue to look at our own organization for improvement. I think part of my role as an “expert” is to help other student affairs assessment offices get up to speed more quickly than we did a couple of decades ago. There are so many improvements in technology, project management tools, and resources that we did not have back then! At this point, I think learning and improvement can be more rapid than they were when we started.

The department I reviewed already had training in place to build capacity among their staff. They have in-person group training, individual consultation, and resources available online. Their division staff have gained confidence and competence from these efforts. All of their division staff have access to Qualtrics, so it is easy for staff to send out surveys (which has its own challenges!). They are well on their way to developing a culture of assessment, especially as more staff become accustomed to assessment.

In the last few years, data has become a hot topic. Who has it? Who wants it? How is it used? How should it be used? How is it protected? This unit is already building what has taken us a while to get in place. As systems become more technologically complex, but also potentially more integrated, the data conversation becomes even more important. It’s a way to decrease silos that have developed over the years if the right people can get together to take action. It gives us a more accurate, complete picture of the student experience.

This department has clear support from the relatively new Vice President of Student Affairs, who saw the importance of creating a multi-person office, rather than having it as a small percentage of one person’s job. Student Life Studies has leadership support as well, but I wonder if DSA staff take it for granted because we have been around for so long. The department I reviewed still has the “new” factor in its favor—staff remember what it was like when the department did not exist. Not many Texas A&M DSA staff were here before 1998.

While I am asked to review other departments based on my experience in Student Life Studies, I always learn something from other places about improvements or innovations we can make here. I hope my advice helps them move forward quickly, but I always appreciate ideas that I get from other offices even if they don’t have the same history.

Filed Under: Assessment

Assessment as Counseling?

April 1, 2019 by Darby

A few weeks ago, in the graduate class I teach, one of the students asked (paraphrasing) if I used counseling skills when working with people on assessment (thanks, Dylan!). I hadn’t ever thought about it that way. I am certainly not a trained and licensed psychologist or counselor like the fabulous staff at Counseling and Psychological Services (http://caps.tamu.edu/), and it’s been many years since I had a counseling/helping skills class, but I can see a few similarities. The assessment relationship helps people through a reflective and analytical process, ultimately to make some sort of improvement. Potentially oversimplifying, the counseling process usually takes people through reflection and thought/behavior change to reach some decision or action to improve.

From the very beginning, the assessment professional has to put the client at ease. The client may be nervous about the process, fear the feedback, or be unsure of the amount of work it might take. Because staff take ownership of their work, they may have some anxiety about how the program or service reflects on their own performance. The assessment professional can address those concerns by talking about the process, the outcomes, and the good things that assessment can help them with. The unknown can be scary, but it can also be seen as an opportunity. The more motivated the client is, the easier the assessment process can be. I see that building rapport is something a counselor needs to be good at to get their client comfortable with the process. I certainly would not want a counselor who made me feel more anxious or stupid.

Counselors and assessment professionals ask good questions. That’s part of the job. They know how to push without being pushy, how to challenge without causing shutdown, and how to get clients talking. When clients come in for assessment help, they may be lost as to where to start, what to ask, and what to do with results. The assessment professional can ask probing questions to get to the key issues, the background and priorities, the desired outcomes, and the risks of change. That also sounds like a counselor’s strategy.

Counselors maintain impartiality. The assessment professional is also a neutral third party, in that they don’t have a stake in the outcomes of most assessments. That allows them to see things with some clarity, without pre-conceived notions, and with an open mind for the future (and yes, we all come with some biases related to our experiences; a good professional can recognize and address them as needed). Because the client is immersed in the program/service/activity, they may have a hard time stepping back and looking at things objectively.

Related to that, the assessment professional may be able to see more options based on knowing about other assessments, how others have used assessment, and what is happening in other environments. While the client is immersed in their program, the assessment professional can look across the other assessments and data available that could inform the assessment process or the use of results. Just like a counselor, the assessment professional can look at a topic from several angles and use several strategies to help the client.

Most comforting, the counselor and the assessment professional provide support to the client. Sometimes it’s being a cheerleader, sometimes a coach, and sometimes an empathetic listening ear. I was going to say a shoulder to cry on, but, to paraphrase a famous movie quote, “there is no crying in assessment.” (Okay, that’s not really true, but I hope there is less crying in our office than in Counseling and Psychological Services.)

I hope this month’s blog has given you some perspective about the relational and support side of being an assessment professional. I would love to hear your perspective.

Filed Under: Assessment

A Response to “Why Do I Assess?”

March 1, 2019 by Darby

In late January, Linda Suskie, a well-known higher education assessment author and consultant, published a blog post (January 31, 2019) titled “Why Do I Assess?” Not only did the post garner comments on her blog, it created a whirlwind of conversation on a higher education assessment listserv. (By the way, I’m a huge fan of Linda’s for her practical, down-to-earth perspective and her clear writing style.)

I think everyone should take a few reflective moments to think about why they assess (yes, I think everyone should assess something at some point). Because it’s required? Out of curiosity? For program improvement? To be a better educator? To know what learning occurred? To create an educated society? Each person may have a unique set of reasons, which could be a great conversation to have among peers, and we need to understand that about each other.

Here are a few of my reasons to assess.

First, I love learning. That statement goes in two directions. One, I like to know more about the world around me. When I describe my job, I tell people that I know a little bit about a lot of things. Not surprisingly, two of my Top 5 Strengths are Input and Learner. Over the years, I think I have become more curious and been able to ask deeper questions. Two, I am excited about the learning process among college students, particularly outside of the classroom. Learning happens everywhere. I am motivated when I see students have an “aha” moment or describe how they were changed by a co-curricular experience. When students transfer learning to other situations or articulate how a developed skill can be used in their first job after college, I feel good about the work we do in higher education and student affairs.

Second, improvement is important (one of my other Top 5 Strengths is Achiever). How do we know how well we do if we don’t assess? I truly believe that we come to work every day to do the best we can for the stakeholders we serve (students, staff, etc.). We might know anecdotally that we are positively impacting students, but it helps to have more than that to support our assertion. And, in case you haven’t noticed, students and our environment change over time. A program created 50 years ago (or even 10 years ago) may not serve the students of today. We need to know that, so we can adapt. Why would you want to continue to do something that takes time, energy, and money, but doesn’t meet the needs of your audience?

Third, it’s just fun for me. Not only do I like the process of assessment, I like the product of assessment. I have fun sitting down with people to brainstorm what they want to know, formulating and implementing a plan, and figuring out how to share and use the results. I think it’s fun to give back to the profession through teaching and writing. I’m very fortunate to be in a role that fits my personality, interests, and skills. I enjoy coming to work and helping people answer their burning questions.

So, those are my thoughts about why I assess. What are your reasons to assess? I would love to hear them.

Filed Under: Assessment

Assessing Everything All the Time

February 2, 2019 by Darby

A student affairs assessment colleague at another institution recently reached out to me with a challenge. Her problem is not a lack of staff motivation to assess; it’s actually the opposite. Her colleagues are trying to assess every activity all the time, particularly using surveys as the data collection method. Of course, that has led to survey fatigue, a common ailment when staff catch the assessment bug.

It’s not uncommon for the pendulum to swing from no assessment, or outright antipathy toward it, to assessing every…single…thing…all…of…the…time. But not only is that unnecessary, it leads to some unintended consequences, including participant fatigue, staff fatigue, information overload, inability to make a decision or changing direction too frequently, and lack of focus on what’s truly important. How do you overcome that? Here are a few suggestions.

–Focus on what’s important to the program, the unit, the department, the division, and the institution. The alignment is important and helps you understand how what you do fits into the larger context. What are the goals and strategic plans that guide practice? If your program or services do not align with guiding documents, you might need to assess whether you should be doing them at all.

–Develop and revisit outcomes frequently. Similar to aligning within the organizational structure above, looking at the outcomes helps you stay internally focused on what is important. If you have developed student learning outcomes, how will you know students have actually learned what you wanted them to learn? If you are assessing program or process outcomes, how are you keeping track of them to know you are effective in your practice?

–Vary your assessment methods. Surveys are overused in student affairs assessment. Perhaps you have the ability to set up focus groups or interviews as a follow-up to survey responses, or to focus on individual experiences and perceptions. If you are promoting student learning with a small group, you could create rubrics for self- and peer evaluation. For longer-term, deeper experiences, participants could journal or reflect on photos they have taken.

–Create a calendar for both short-term and long-term assessment practices. In the short term, look at the number of programs/activities you do, the academic calendar, and planning and reporting timelines. If you present a canned program 10 times a semester, do you really need to assess it 10 times, or can you take a sample? Maybe you pick every second or third program in the first semester to determine what changes need to be made for the following semester. If you are looking at usage of services, maybe you pick one or two “typical” weeks in a semester to review, rather than 15 weeks. (One way to draw such a sample is shown in the sketch at the end of this post.) In the long run, you can create a calendar of important topics you want to know about and how you plan to assess them. Maybe one year focuses on satisfaction (using a survey), while the next year focuses on student learning (using rubrics or exit interviews), and the year after that focuses on tracking usage (using observation and preexisting data). Your assessment will be ongoing, but it will also give you focus areas for improvement each year.

–Be brief. See my previous blog, “You Only Get Five Questions.” People are much more likely to respond to a few quick questions than a long, involved survey. Focus on what you NEED to know, not what you just want to know. Besides, you can’t address 100 things in a year, so you don’t need to ask 100 questions on a survey. Moreover, if you have asked the same questions for several iterations, the answers have been (acceptably) consistent, and you have no plans to change that area, STOP asking about it for a while. You already know the answer.

–If you haven’t used past data for change, don’t reassess yet. Why would you think the answer would be any different? Change and improvement take time to implement, especially for large-scale changes. Keston Fulcher and his colleagues at James Madison University wrote a great article, “A Model for Learning Improvement: Weigh Pig, Feed Pig, Weigh Pig.” If you assess something (weigh pig) without taking action (feed pig) before assessing again (weigh pig), you will likely get the same results. You can assess too frequently, which becomes a waste of time and resources for both you and the participants.

I hope this gives you some ideas about just saying no to over-assessing. You need time to do the great things you do for students and staff; you don’t need to assess everything all the time. Take that advice from someone who loves assessment and works with staff to do it well every day!

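To make the sampling idea concrete, here is a minimal sketch (in Python, with invented session counts) of one way to draw a systematic sample of program offerings and a couple of “typical” weeks. The offering labels, the every-third-offering interval, and the two-week usage sample are illustrative assumptions, not a prescription.

```python
import random

def systematic_sample(items, interval, start=None):
    """Pick every `interval`-th item, beginning at a random offset
    so the same offerings are not always the ones assessed."""
    if start is None:
        start = random.randrange(interval)
    return items[start::interval]

# Hypothetical example: a canned program presented 10 times in a semester.
sessions = [f"Offering {n}" for n in range(1, 11)]

# Assess roughly every third offering instead of all ten.
to_assess = systematic_sample(sessions, interval=3)
print("Assess:", to_assess)

# For service usage, review one or two "typical" weeks instead of all 15.
weeks = list(range(1, 16))
print("Review usage during weeks:", sorted(random.sample(weeks, k=2)))
```

The random starting point keeps the sample from always landing on, say, the first offering of every month; in practice you would also want to skip obviously atypical weeks (exams, breaks) when sampling usage.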
Filed Under: Assessment, Planning

Increasing Response Rates

December 5, 2018 by Darby

One of the most common questions we get in Student Life Studies is how to increase response rates, especially for electronic surveys. Students are over-surveyed, so just because you can technically send out a survey to 65,000 students doesn’t mean you should (Hint: you should rarely, if ever, send out a survey to your whole population—Just Don’t Do It!). There is a science, and a bit of luck, to increasing response rates. As with so many aspects of assessment, you must keep your audience in mind every step of the way. Here are a few suggestions based on our staff experiences, as well as Dillman, Smyth, and Christian (2009).

1. Make the survey interesting, relevant, and brief. No one wants to take a long survey that doesn’t matter to them. We have found that surveys sent to a particular group (organizations, participants in a specific program, etc.) get higher response rates than surveys sent to a random sample.

2. Make the cover email interesting, relevant, and brief. You need to hook potential respondents in, even before they might click on a link. Have the email come from someone they know, make the subject line engaging, and explain the purpose of the survey and the importance of their participation. If you can, provide examples of how you have used their feedback in the past.

3. Set a deadline. If you are like me, you might put off doing something because it doesn’t have a clear deadline to establish any urgency. Most surveys stay open about 7-14 days. You rarely need to keep a survey open longer than that.

4. For students, send surveys early in the semester, on Mondays or Thursdays, and in the afternoon. A few years ago, we looked at response rates based on month, day of initial invitation, and time of day of invitation. It was eye-opening: timing matters. Also, know what other large surveys are going out around the same time, so you don’t impede each other’s efforts.

5. If you have the ability, you might want to pre-notify your potential respondents that a survey is coming to them. Then people anticipate getting the survey and mentally plan time to take it. Notification can be by email, word of mouth, social media, mail, flyers, etc.

6. Plan for 2-3 reminders with different messages. Each time a reminder is sent, you will get a spike in responses. The initial email will always garner the most responses (usually in the first 24 hours); responses then decrease each day until you send a reminder, when the response rate will jump a little, and the pattern repeats with each reminder (the sketch at the end of this post shows the pattern with made-up numbers). There is a fine balance between sending reminders and making potential respondents annoyed.

7. Incentives have a mixed record of impacting response rates. You have to balance the cost of incentives, and of administering them, with the potential benefit of increased response. You also have to decide whether every respondent gets something small or a few people win something large. We have not found that incentives significantly impact response rates. See #6 for an alternative.

8. Say please and thank you, just like your mom taught you. Ask for advice or help and show positive regard for people’s time and effort. Personalize the communication and make the task seem important. In the email and on the survey, be sure to thank people for their input.

9. Ensure confidentiality (to the extent you can). People will be more likely to be honest when they trust you to protect their information.

10. Ask Student Life Studies to help you! We have expertise and practice in this area and can guide you through the process.

Following these tips will help you increase your response rate. Even when the response rate is lower than you want, there are things we can look at to help you understand whether your data will be useful and representative of your population. Plan, Plan, Plan!

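As a simple illustration of tip #6, and of checking your final response rate, here is a minimal sketch with made-up numbers (the invitation count, window length, reminder days, and response counts are all invented). It tallies the overall rate and prints a day-by-day count, which is where you would see the initial spike and the smaller jumps after each reminder.

```python
from collections import Counter

invited = 400  # hypothetical invitation list size

# Hypothetical day-of-response for each completed survey in a 14-day
# window, with reminders sent on days 4 and 8.
response_days = ([1] * 52 + [2] * 20 + [3] * 9 +   # initial invitation
                 [4] * 30 + [5] * 12 + [6] * 5 +   # first reminder
                 [8] * 24 + [9] * 10 + [11] * 3)   # second reminder

rate = len(response_days) / invited
print(f"Response rate: {rate:.1%} ({len(response_days)} of {invited})")

# A crude text histogram of responses per day.
daily = Counter(response_days)
for day in range(1, 15):
    print(f"Day {day:2d}: {'#' * daily.get(day, 0)}")
```

If the daily counts flatten out well before the deadline, and another reminder seems likely to annoy more than it gains, that is a reasonable signal to close the survey.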
Filed Under: Assessment

Assessment vs. Research: What’s the Difference?

November 1, 2018 by Darby

You may have heard the terms “assessment” and “research” used interchangeably. Are they really the same thing? Does it matter? (And that doesn’t even include throwing “evaluation” into the mix!) There have even been recent debates among professionals about it (http://www.presence.io/blog/assessment-and-research-are-different-things-and-thats-okay/, https://www.insidehighered.com/views/2016/11/21/how-assessment-falls-significantly-short-valid-research-essay, https://onlinelibrary.wiley.com/doi/abs/10.1002/abc.21273).

In my opinion, assessment and research have a lot in common. They are about collecting data to learn something, they use similar data collection methodologies (qualitative and quantitative), they require knowledge and practice to be effective, and they are important to student affairs and higher education. There are expectations of good practice in both areas.

On the other hand, there are some key differences. The purpose of research is to create generalizable knowledge, that is, to be able to make credible statements about groups of people beyond one campus. It might be about first-year college students, new professionals in student affairs, or college graduates in STEM fields. Research may also be used to develop new theories or test hypotheses. Assessment is typically confined to one program, one campus, or one group. In that case, the purpose is to collect information for improvement to that particular area of interest. Assessment would rarely set up an experimental design to test a hypothesis. The results are not meant to apply to a broader area, but they are key to decision making. Assessment can provide reasonably accurate information to the people who need it, in a complex, changing environment.

The timing of research and assessment may differ. Research may have more flexibility in the time it takes for data collection because it may not be tied to one particular program, service, or experience that will change. Alternatively, assessment may be time bound, because the information is being collected about a particular program or service, so changes can be implemented. It may be an event that occurs on an annual basis, information is needed for a budget request, or data needs to be provided for an annual report.

The expectations for response rates may also be different. Of course, everyone wants a high response rate that reflects the population of interest. Realistically, though, that may not happen. In research, there may be more effort and resources to recruit respondents over a longer time, or to use already-collected large data sets. There may be effort to determine whether late responders were similar to early responders, or whether more recruitment needs to happen. In assessment, partially because of its time-bound nature and the over-assessment of college students, staff may have to settle for the response rate they get and decide whether the results are credible.

The audience may also differ. Ideally, all professionals should be keeping up with the literature in their field based on sound research. Research results are published in journals for other researchers to see and use. More narrowly, though, assessment provides (hopefully) useful information to decision makers and practitioners about their particular area. In the big picture, assessment results can inform research questions and vice versa.

Research typically requires Institutional Review Board (IRB) approval before collecting data from “human subjects.” That board wants to ensure that people are not harmed and that appropriate processes are followed. Because of its narrow focus and usually low-risk nature, assessment typically falls outside the IRB process.

All in all, both assessment and research belong in student affairs and higher education. They are important to individual campuses and departments. They just may look a little different in structure and use. Practitioners need access to both to be the best they can be.

Filed Under: Assessment

Is Assessment a Four Letter Word?

October 1, 2018 by Darby

A few weeks ago, there was a thread on a couple of listservs about the use of the word “assessment,” which for some people has a negative connotation. Obviously, I like the word assessment, but I understand how some people may be scared or turned off by it.

Listserv members offered up a variety of alternatives that might be more palatable for the folks on their campuses:
• Evidence
• Improvement
• Effectiveness
• Learning
• Storytelling
• Evaluation
• Research
• Problem Solving
• Inquiry
• Quality Assurance

As you can see, there were a number of suggestions to replace the word. All of those are good words, and they have different connotations, depending on how precise you want to be about the definitions. You can see that “accountability” is not on that list—I think that is another word that creates negative associations with a potentially frustrating experience.

At the same time, there were a number of people who said that we should continue to use the word assessment and change the attitude toward it (reclaim the word). Several people gave examples of how their colleagues had one bad experience and then generalized it to all future endeavors. It becomes hard to change someone’s mind when they have negative associations with a word or were somehow punished, felt like they wasted time, just did busy work, etc. This line of thinking focused on the activities of assessment, rather than the language.

Here’s my thought: I don’t really care what you call it. My vision is that you are doing something to know that you have improved the lives and experiences of college students. You cannot continue to implement your program/service/experience/course the same way you did a decade ago and think that the students are getting the same benefit from it. That something should have some structure and system to it, but it doesn’t need to be your dissertation. It can be quantitative or qualitative. It can be a sample (of people, timeframes, services), rather than all things all the time. It can be fun. (Really, it can be fun.)

It’s about how you know what you know, so that you can continue to do better and do the best for the students you serve. It’s about documenting (even in a creative way) to tell others about what you are doing and what you know. It’s about thoughtfully taking action based on something other than your gut feeling.

So, is assessment a bad word? I’d love to hear your perspective.

Filed Under: Assessment

Reflecting on the Year

May 1, 2018 by Darby

It’s May. Time to reflect back on a (hopefully) successful academic year. When things start to quiet down, it’s a great time to reflect on your assessment before getting wrapped up in planning for the next year. We have to set aside time for it, or else we get sucked into putting out fires and handling the immediate (which is not always the most important).

What does reflection look like? It’s really a process of dialoging with oneself (or with others) to gain additional perspectives, think about our values and beliefs, and put that in the larger context in which we operate. This gives us clarity to change (Jay & Johnson, 2002).

How do you start? One of the easiest “formulas” I have seen for reflection is the “What? So What? Now What?” process. The “What?” is a description of what happened, the “So What?” examines the significance of what happened in context, and the “Now What?” looks forward to what you will do differently.

Let’s look at a simple example.

What? You did a pre/during/post rubric with the executive officers in the organization you advise to gauge the growth of their leadership skills. Students used the rubric to self-evaluate, and you also completed a rubric on each student at the end of September, the beginning of January, and in April. The rubric represented several of the institution’s undergraduate learning outcomes. Most students indicated growth in the areas of communication, critical thinking, and personal responsibility (ethical leadership). Students did not grow in the area of working collaboratively (teamwork, considering different points of view, supporting a shared goal). Not only did the students not score highly on the rubric in April, you also observed their group conflict and lack of cohesion in fulfilling the group’s mission. (A minimal scoring sketch follows this example.)

So What? This is significant because students need to be able to work together now (in classes and co-curricular activities) and in the future, when they will have careers working with others. The organization, its audience, and its members suffer when the executive officers are not on the same page and working together. Because this is also an expectation from the university, you know students should be developing this skill before they graduate.

Now What? As you think about the new officers who have been elected for the fall, you know you might need to be a little more engaged with them at the beginning. Working with the president-elect, you plan a retreat that will focus on group development, understanding different viewpoints, and setting organizational goals. You will also continue to use the rubric to evaluate student performance and observation to evaluate organizational performance.

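If you keep rubric scores in a spreadsheet or a small script, a minimal sketch like the following can tally growth per outcome area and flag areas that did not move. The scores, the 4-point scale, and the growth threshold are invented for illustration.

```python
# Hypothetical average rubric scores (1-4 scale) for the officer group
# at the three evaluation points in the year.
scores = {
    "Communication":           {"Sep": 2.1, "Jan": 2.8, "Apr": 3.4},
    "Critical thinking":       {"Sep": 2.0, "Jan": 2.5, "Apr": 3.1},
    "Personal responsibility": {"Sep": 2.4, "Jan": 2.9, "Apr": 3.3},
    "Working collaboratively": {"Sep": 2.2, "Jan": 2.3, "Apr": 2.2},
}

MIN_GROWTH = 0.5  # assumed threshold for meaningful growth

for area, by_month in scores.items():
    growth = by_month["Apr"] - by_month["Sep"]
    flag = "" if growth >= MIN_GROWTH else "  <- little or no growth"
    print(f"{area:24s} {by_month['Sep']:.1f} -> {by_month['Apr']:.1f} ({growth:+.1f}){flag}")
```

Pairing numbers like these with what you observe in meetings, as in the example above, gives you both the “what” and the context for the “so what.”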
Block some time on your calendar to review the results of your formal and informal assessment(s) from the past year. Maybe make an appointment with a colleague to talk through your reflections. Set a timeline to implement potential changes in the new academic year. If you don’t do it, you will be doing everyone, including yourself, a disservice.

Reference

Jay, J. K., & Johnson, K. L. (2002). Capturing complexity: A typology of reflective practice for teacher education. Teaching and Teacher Education, 18, 73-85.

Filed Under: Assessment

