How do you organize your classroom library? Do you shelve by genre? Mimic the Dewey Decimal System?
In my double-wide, portable classroom, books are grouped by genre, and then alpha by author. Starting with the first bookcase on the left, the books run: fiction, nonfiction (after the window), short stories, poetry and picture books (near the projection screen), and then the Plugged-in to Nonfiction collection in the short green shelves. Behind my desk (not pictured) are bookshelves of professional books.
Why am I thinking about book organization? I'm thinking about it because I'd like to do it better next year. Do you "hire" classroom helpers? I would like to do that. I'd like to have students read about the workplace, create resumes, apply for jobs, sit for interviews and get hired. A brilliant teacher I had the pleasure of teaching down the hall from more than a decade ago was the best at managing a classroom workforce. I'm still awed by how he did it. So in part I'm thinking and rethinking the book organization because I'd like to hire help or staff the books with parent volunteers next year.
I also keep coming back to the books and organization issues because my teacher-mind is worried about test scores. We got our results from the FCAT reading test last week. My students read like wildfire this year--or at least I thought they did. They talked about books. Not just any sort of character name-dropping either. The kind of talk I heard is the "accountable talk" Stephanie Harvey and Anne Goudvis discuss. Talk that tells my teacher ear students are really reading. Students self-reported the number of books they read. Many said they read more this year than they'd ever read. We started tracking the count--135 students read more than 1,500 books. So what? What matters?
To the state, the FCAT test results matter. A reading test of 54 questions that takes less than 3 hours to administer over a two-day period determines these students' reading lives for next year. Score below a magic number and students lose an elective class in their schedule and are placed in intensive reading classes to boost their skills. For me, for the first time in nearly 2 decades in the classroom, I didn't see the reading payoff in test scores that I usually do. Scores for students tracked in honors classes mirror state averages. Scores for students tracked in regular classes fall short. I have a lot of questions about the test and test results:
- What do the results say about my teaching?
- What will the state think the results say about my teaching when Florida shifts to the "value-added" teacher evaluations?
- What could the change in test format (from the FCAT, Florida Comprehensive Assessment Test, to FCAT 2.0) say about our curriculum and instruction?
- Are we teaching what the state is testing?
- How can 1 test measure a year's worth of growth?
- What do our scores in each strand tell us about curriculum?
| | Vocabulary | Reading Application | Literary Analysis: Fiction and Nonfiction | Informational Text and Research Process |
|---|---|---|---|---|
| Number of Points Possible | 9 | 12 | 11 | 13 |
Here are our average scores. I don't have mine. We get one document, a PDF, listing students' scores in alphabetical order. I did pull my students' overall scores and change in developmental scale score (these are the learning gains NCLB accounts for), but I have yet to examine my strand data. I had to return the file and pass it on to the next person. From the county averages you can see that we need to look at instruction in reading application. What does that mean? What is tested in that strand? How much of that is figurative language, and how much is thinking/reading strategies such as inference and main idea? Those are questions our English department will surely discuss. But how much time do we devote to testing?
I re-read bits of Berliner and Biddle's The Manufactured Crisis last week. Berliner talks about his early claims in this article from School Leadership Briefing (you can read the beginning without a subscription). As Krashen says, it's about poverty, not test performance. Last week was demoralizing for teachers and students. New pictures of our students and ourselves will emerge as we analyze and re-analyze and look at "the data" next fall. How much time do we spend crunching those one-shot numbers?
So what will matter to me going into next year? How will my instruction change? How will I lift up the readers in my room?
What matters to me is turning kids into readers. Readers take control of their own learning. Reading is power both in the classroom and more importantly in the world. Why didn't students' test scores show off that power this year?