Thursday, July 12, 2007

Gypsy fiddlers and one step backwards.

Did I mention the gypsy fiddlers last night? After I forgot to cancel the last of my many planned social events so I could write again, I ended up having to go, no problem after all, since I figured I was far enough ahead, almost done with a first narrative draft. And it was wonderful: I went to the new Frank Gehry bandshell in Millennium Park, got a seat right down front with one of my favorite people, the 82-yr-old Raffle Rose, and settled in to watch the symphony. Now, it's been 20 years since I've gone to one of those, so I was trying to fit in without having to pretend to like it. But the guests were gypsy fiddler kings, a small band of them playing with the symphony. Now that's the kind of symphony I can get into!

Now it's Thursday afternoon. I've volunteered to put in my full draft of a CBPR grant for review tonite, but unfortunately I've gotten even more good feedback on my current draft, of the change-buckets variety. Sigh. So I'm tired, but I'm mustering for one more big writing push.

And today I heard from one of the faculty what she really walked away from this whole shindig with: that CBPR is not a theory, not an aim/goal, and not an outcome; it's a methodological enhancement only, and it should be nicely tucked into the framework of your proposal, not spotlighted. (paraphrased) I'll ask more faculty later what they got out of it.

It is nice for us to have these experts on tap for days at a time, ready to review yr stuff in a jiffy, with worlds of combined experience between them.

Equity in CBPR budgets

First, let me say one thing... I actually haven't taken notes on many of the great presentations on the community-interaction elements of this institute. I think that's because many of these factors are more familiar territory to me, and some are very intuitive. But just to be good, I'll review some of the main components here:
  • do not underestimate how long and how much of an investment it'll take for a researcher to build trust with the community partners (especially if you are out of community!)
  • This is a business relationship, not a marriage: don't rely on good feelings, rely on following all good business principles of mutual respect, documented expectations, money for work, joint tasks and timelines, etc.
  • Researchers should expect to put more than the contracted amount into the community partner: help them with their capital campaign, etc.
  • Face it, there's usually a fair amount of in-kind given by both sides.
  • Now this is just my own deduction, but after hearing some amazing stories of CBPR relationships I think one of the smartest things either the community or researcher can do is make sure they partner with a hotshot on the other side. You will get the best performance, the most bestest results if you invest in a person/organization who has proven to be really competent, or even outstanding if you can find them.
  • Respect the fact that answers to many significant problems exist in that community knowledge. Sometimes ideas about methods to get there may exist more on the researcher side, tho not as certainly.
  • CBPR applications at NIH get scored in part on their CBPR infrastructure and partnership
Now onto a few comments on CBPR budgets and equity between both sides.
  • make sure you adequately compensate all activities of your community partners, presentation travel, etc.
  • make sure you pay indirects to your community partner
  • universities can only charge indirects on the first $25k of a subcontract, so the prime isn't charging double indirects on the whole subaward amount.
  • get a focused subcontract for the community deliverables, use eval measures on both sides to make sure you keep to performance. Likewise watch to make sure your researchers all perform, otherwise you'll sour your community partnership.
  • make sure your subcontract to the community group pays for a staff position, not just unassigned deliverables
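Since the indirect-cap rule above trips people up, here's a toy calculation of what it means for a budget. This is just my own illustrative sketch, and the 50% rate is a made-up number, not anyone's real negotiated rate.

```python
# Toy illustration of the rule that the prime institution can charge
# indirect costs on only the first $25k of each subcontract.
# The 50% indirect rate used below is hypothetical; real rates vary.

INDIRECT_CAP = 25_000  # portion of a subaward the prime can charge indirects on

def prime_indirects_on_subaward(subaward_total, prime_rate):
    """Indirects the prime charges against a subaward (base capped at $25k)."""
    return prime_rate * min(subaward_total, INDIRECT_CAP)

# A $100k subcontract to a community partner, with a 50% prime rate:
print(prime_indirects_on_subaward(100_000, 0.50))  # 12500.0, not 50000.0
```

So on a big subaward, the "double indirect" worry only applies to that first $25k slice, not the whole amount.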
OK, I've gotta get back to writing, because the good news is I think I have enough done to submit my draft application as one of the 3 that'll get the big review from everyone tomorrow! Yay that, and weeha, wasn't sure that could happen. On the other hand, I've just gotten feedback that my methods section sucks (well, politer language was used)... so I gotta haul ass before it's due at 8 pm tonite.

Whew, the titans heat up.

Wow, in case we were napping this am... or alternately writing like mad, a presentation this morning on methods for process evaluation got our back row of NIH folk fired up more than anything before. As my buddy next to me said, it's always nice to be a fly on the wall while big muckety-mucks from NIAAA, NIDA, CSR and others debate something amongst themselves. But alas, it was perhaps more an appearance of disagreement than true substantive differences. Regardless, let me summarize the important points as I interpreted them.

  • basic point -- NIH does not fund evaluation research, if you don't have an outcome to measure, don't come knocking.
  • process evaluation is most often used to measure fidelity to a planned intervention, and you aren't going to get thru CSR (the folk that score your proposal) without it (altho another person from NIAAA said process eval was less important to him for interventions that were well baked and documented)
  • but it can be used in a non-intervention setting to track your aims... like measuring satisfaction and faith in the process among yr CBPR folk, measuring the level of project/topic education among your CBPR folk, or comparing the number/demographics of people actually enrolled to the potential enrollment population
  • now back to interventions: the NIDA person said that as they start to emphasize interventions with longer interaction periods, the value of process evaluation increases, because the impact of the intervention can be affected by a greater number of factors, which then need to be monitored through process eval.
Another side note: whenever you have a survey of participants... always ask at the end if you can contact them again!

And yet another note: an often overlooked item in NIH applications is a data safety and monitoring plan, in the words of Dr. Bill - don't forget it!

Now it's time for a break, and in the words of our esteemed co-chair... "I'm sure there's another thousand-calorie snack out there from Wolfgang Puck"

--- Break + 2k calories of muffins later ---

Another note I forgot from before: new applicants to NIH can now get expedited comments and are then allowed to resubmit in the next cycle, which could be as soon as one month away.

Now, as many of you know but some may not... NIH grants can be submitted 3x, and the process can easily take 2 yrs. Each time, your application can come back 1. not scored (but with review comments), 2. scored but not well enough to be funded, or 3. scored well and funded. Scoring happens over in Dr. Bill's shop, the Center for Scientific Review (CSR). Funding happens according to "paylines" decided by the institutes that take your grant into their "portfolio". Each PA usually has an institute portfolio attached (and it's one more reason to start your process by asking the grants person at the institute what's important to them, because some portfolios are full for certain PAs, so it's a waste to apply for those; they won't get funded).

Also, if I can remind folk: I might get some of this wrong, so don't live by it!

Now, in the words of Kate Greeno: she expects to get funded, but right now, with paylines so low, she does not expect to get scored on the first submission; many seasoned, funded researchers don't these days. She also plans on each application needing the full 3 submissions to get funded. (So plan early!)

Now, another note of info from last night. What is the difference between all those different funding announcement types at NIH?

All of these announcement types are now called FOAs, Funding Opportunity Announcements.
Remember at NIH, you never are really working on a "proposal" it's almost always an "application".

PA = program announcement, most common, regular due dates, can stand for years. This is NIH's routine way to say "we want to see applications that are about this". Some are very very broad, since smaller grants (R03, R21) have to reply to a specific PA and they wouldn't want to exclude good ideas by having all PAs be narrow.
PAR = PA with special receipt, referral, and review considerations; a PA with its own special due dates. Often an LOI should come first.
RFA = a one-time shot with $ attached, often used to fund a small number of studies to kickstart a new area of interest. Interestingly, if you apply for this and don't get it, you can reapply under a regular PA, giving you kind of an extra shot at it.
PAS = like an RFA that lasts for many years; there are set-aside funds attached to this as well, but it's longer-term than an RFA.

Wednesday, July 11, 2007

Wed 2 - Qual sampling & Retention

Sampling, Recruitment and Retention -- Qual part by Deborah Padgett

Hey y'all, we've had some quantitative presentations that didn't apply to my work as much, so I didn't take notes. But now we've got Deborah Padgett talking, and she wrote a couple of books about qualitative methods, so let me take notes here.

She starts by reminding us that nearly all qualitative research uses non-probability sampling.

Sample size
Vigorous discussion here about how you pre-state a sample size when the goal is saturation or redundancy, an iterative phenomenon you can't predict beforehand. You need to include a sample size table in your grant (though I don't see that in the directions; I'll ask more later). And you need to tell your IRB a sample size. I've also heard unofficially from an AJPH editor that they really hope any incoming qualitative studies have at least 25 folk. Deborah says it's understood that yr sample might be larger if you are doing grounded theory and smaller if you are doing a phenomenological study, and if your journal doesn't like your sample size, switch journals, because a sample of 10 can really be valuable when conducted well. But the consensus is to project a sample size, even if you then undershoot. In the immortal words of one of the co-chairs here... "You always have to say a number, then we never hit it, but you say it."
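To make the saturation point concrete, here's my own little sketch (not Deborah's) of saturation as a stopping rule: you keep interviewing until some number of consecutive interviews yield no new codes, which is exactly why you can't name the final n in advance. The interview codes and the streak threshold below are made up for illustration.

```python
# Toy illustration of "saturation" as a stopping rule: stop once
# `no_new_streak` consecutive interviews contribute no new codes.

def interviews_to_saturation(coded_interviews, no_new_streak=2):
    """Return how many interviews it took to hit saturation."""
    seen, streak = set(), 0
    for i, codes in enumerate(coded_interviews, start=1):
        new = set(codes) - seen      # codes we haven't heard before
        seen |= new
        streak = 0 if new else streak + 1
        if streak >= no_new_streak:
            return i
    return len(coded_interviews)     # never saturated within this sample

data = [{"stigma"}, {"access", "cost"}, {"cost"}, {"stigma", "access"}, {"family"}]
print(interviews_to_saturation(data))  # 4: interviews 3 and 4 added nothing new
```

Notice the punchline: with slightly different data the answer changes, so the number you promise in the grant is a projection, not a guarantee.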

Main types of Purposive sampling
  • extreme or deviant case sampling (seeking the outliers)
  • intensity sampling (similar but doesn't go to the extreme outliers)
  • maximum variation sampling (sampling for heterogeneity)
  • homogeneous sampling (opposite of above)
  • typical case sampling (getting individuals in the middle of the bell curve)
  • criterion case sampling (setting criteria and using folk who meet them; Nominations is a subset where folk refer others who meet the criteria and then you pick, often picking the folk with the most nominations; works best with a positive valence)
  • snowball sampling (when first enrollees refer others, NIDA has a protocol for snowball with IDUs)
  • RDS and PDS for CBPR (respondent-driven or participant-driven sampling)
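Since snowball sampling came up, here's my own minimal sketch of the mechanic, not NIDA's actual protocol: seeds enroll, each enrollee refers a few contacts, and you stop at your target n. The network and the referral cap are invented; real protocols add eligibility screening, coupon limits, and incentives.

```python
import random

# Minimal sketch of snowball sampling over a hypothetical referral network.
# `contacts` maps each person to the people they could refer.

def snowball_sample(seeds, contacts, target_n, refs_per_person=3, rng=None):
    rng = rng or random.Random(0)
    enrolled, frontier = [], list(seeds)
    seen = set(seeds)
    while frontier and len(enrolled) < target_n:
        person = frontier.pop(0)          # enroll the next referred person
        enrolled.append(person)
        referrals = [p for p in contacts.get(person, []) if p not in seen]
        # each enrollee refers at most `refs_per_person` new contacts
        for p in rng.sample(referrals, min(refs_per_person, len(referrals))):
            seen.add(p)
            frontier.append(p)
    return enrolled

network = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": []}
print(snowball_sample(["A"], network, target_n=4))  # 4 distinct IDs, "A" first
```

The obvious caveat, which is why RDS exists, is that who you reach depends entirely on who your seeds know.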
Retention issues
Many folk have gotten grants, set sample sizes then fallen way short... now NIH is talking about cutting off funding for unmet enrollment goals (Quote from our OBSSR rep: "It's beyond talk")

So plan for your attrition in yr sample plan. (note from another meeting on this point - best way to keep your folk enrolled and in contact is to plan for routine contact points with them, don't leave big gaps, otherwise you lose folk).

Retention tips (from a large homeless study that had amazing retention rates)
  • keep a toll free project number and distribute business cards
  • use incentives, cash & metro cards (cash is best and you should argue for it)
  • monthly check-in phone interviews
  • Refreshments for all gatherings
  • easy access location, near subway hub
  • incentives mailed out quickly
  • letters and holiday cards sent to participants
  • holiday party (the idea is to create a community of folks)
  • interviewers become familiar with participants' lives and use "rapport talk"
  • use respectful and formal language (not first names unless indicated)
Now we want to talk more, but it's time for our next speaker, so half of us will probably corner her in the hall later.

Hey, thanks to Rupaleem, another participant, she's telling us all about this good sampling website from RWJF with cites for the different types of sampling....

The race is on!

Hey, I haven't forgotten y'all here; it's just that I'm in the mad dash now to try and create a full grant application by Thu eve... so it's pretty much write all the time while folk are talking and discussing, break briefly for food, read endless buckets of literature, sleep a bit, then up early to write more. But I think I'll get a bit on top of it today, so I'll summarize some of our great presentations then. What's my topic? A tiny project trying to find out more about why LGBT youth take up smoking at such high rates. Encouragingly, everyone here seems to like the idea, so that's good. The three folk who get their full narrative done by 8 pm Thu can volunteer to get the full group's feedback on it Fri... and I think there's at least a small hope, ahem, well, it's not impossible that I could get there. (Why, oh why, didn't I read more on the topic area before?)

(The hotel puts a small container labeled "vanity kit" in the bathroom...I had to wait a day to open it just because the thought of what tiny thing could be a vanity kit was so intriguing. It turned out to be 2 qtips and 3 cottonballs. Hmmm... I know qtips are amazing, one of those things like fedex where you don't need it but then suddenly can't live without it, but I didn't realize they were quite that transformative.... ok, enough digressing, back to work, it's 7:30 am already, we start in 1 hr!)

Monday, July 9, 2007

Mon 2 - Defining Community & Developing a Co-Equal Partnership

Defining Community and Developing a Co-Equal Partnership: Academic and Community Perspectives
Lisa Sanchez-Johnson U of C & Xichel Woods, Greater Humboldt Park Community of Wellness

This is the old-home presentation, cause I spent many years living in Humboldt Park in Chicago. It's a huge Puerto Rican neighborhood and one of the few places in the northern U.S. where you can develop a taste for corn on the cob with mayo, cheese and lime on top, or rice ice cream (remember, the rice ice cream has a raisin in the bottom; that's how you tell it apart from the coconut ones). I'll definitely be hitting it later to get my fix.

First thing... see a chapter by Israel in the book Community-Based Participatory Research for Health (2003) where the principles of CBPR are outlined well. Make sure your community understands these principles.

3 stages of CBPR implementation
  1. Identification
  2. Development
  3. Maintenance
OK, I drifted away a bit here, as I started reviewing lit for my paper while listening.

Mon 1 - Research thru a CBPR lens

Morning all, still no luggage and a typically restless night in a fancy hotel (you know, I cannot remember having any restless nights in a tent, hmmm), but hey, we power on.

There are now 28 of us here, all expected to write an NIH CBPR proposal as a result of our training in this institute. There are almost as many faculty as participants, and they've tried to get at least one faculty tuned to each of our research interests.

Welcome by Hank Webber
VP for Community and Govt Affairs at U of Chicago

He's just welcoming us really, but let me try to capture one of his good lines, and let me also warn y'all, all quotes are subject to the wild vagaries of my memory...

"We all believe there is deep knowledge in communities about how to solve social problems. There is also deep knowledge in universities. CBPR is a search for mutuality, achieving both academic rigor and public benefit."

Research Through a CBPR Lens
Sarah Gehlert
Principal Investigator of U of Chicago's Ctr for Interdisciplinary Health Disparities Research, and their project leader on their CBPR efforts.

CBPR is a balance between sometimes oppositional forces: community reality, the faith that findings will translate into real world outcomes, and academic rigor, the faith that findings have high reliability and validity.

"Community ideas about research and sampling may conflict with what researchers consider good science."

A common way CBPR applications fail review is that either
  • the community partnerships are strong, but not written about systematically and scientifically
  • the project is strong scientifically, but fails on demonstrating true co-equal partnerships with the community
Mini discussion: what if we are researching communities where the people are too unstable to be co-equal partners, like injection drug users? Folk talked about using community groups as access points, community leaders, and ex-members of target communities, and how co-equal partnership is really a goal, not a yes/no: you're trying to make someone as much a part of the partnership as they are able to be.

Insightful words of one community partner, "They say this is a partnership, they come to our homes and talk to us, but I'm never invited to their [the scientists] home."

Sarah Gehlert's Challenges of CBPR
  1. CBPR should be community-based, not just community-placed. The ideal is when the community originates a project and researchers are brought onboard to help with it.
  2. How to define community.
  3. Achieving co-equal partnerships
  4. Sharing findings and influencing policy and practice (could it harm the community? what and where to publish? decide in advance how disagreements will be handled; one way someone did this was to establish community co-directors who are the frontfolk to negotiate with when disagreements come up).
  5. Recognizing that CBPR principles alone don't dictate good design and methods
  6. Conducting ongoing evaluation
Morning break