DCPL Study Rooms: A Study: Palisades

One of my friends is a librarian at Palisades, and she asked me not to publish this review, because the study rooms at Palisades are already in high demand. There is a reason for this: they are FANTASTIC.

Good things? The lighting is excellent, the rooms are spacious, the tables have built-in electrical outlets, and the window in the wall is both large enough to make the room feel open and small enough that you don’t feel like you’re in a fishbowl. The library itself is light and airy with plenty of work tables and comfortable chairs, there is a parking lot, and the library is across the street from a nice-looking coffee shop that I didn’t go to because I’m pinching pennies. Also, there is some lovely posh-suburb scenery on the way there.

Less good things? There is very little public transit serving Palisades. There are only three study rooms. And it is a schlep to get there from my home.

Overall? I don’t recommend going out of your way to Palisades, but if you have another reason to be in that part of the District anyway, it’s a great library for work.

DCPL Study Rooms: A Study: West End

My first study room experience was a few years ago, at the West End library. Bottom line up front: these are not my favorite study rooms.

Let’s start with the good things: there are five study rooms at West End, so there is less scarcity-induced demand. West End is pretty convenient for my life. (It’s super close to a Trader Joe’s! Must-buys: the pasta sauce with the red label, Sour Scandinavian Swimmers, olive fougasse, arugula.) The study rooms are not fancy, but they are comfortable, with adequate space and easy access to electrical outlets. The library is adjacent to an (admittedly expensive) coffee shop. The study rooms lock automatically, so if you leave to use the restroom, your belongings will be safe. And if you’re not using a study room, there is good table space in the rest of the branch.

Now, why are these not my favorite? Primarily because they are not at all soundproof. The first time I used the study rooms, the person in the room next to me was on a Zoom call with the Office of the Tenant Advocate. I know this because I could hear almost everything that was said. Also, the study rooms lock automatically, so if you leave to use the restroom, you need to wait until a librarian is not engaged in helping another patron to get back in. (Helpful librarians are good! Needing to wait when you don’t know where the librarian is? Not good!)

Have you used the study rooms at West End? Do you agree with this review? Share your thoughts!

DCPL Study Rooms: A Study: Chevy Chase

Important: There is a Chevy Chase library in DC, and there is a Chevy Chase library in Maryland. Both are on Connecticut Avenue. They are miles apart, and it isn’t an issue if you use your brain in conjunction with your map app of choice, but you should be aware of this if you, say, use your map app to figure out how far the library is from a second location on Connecticut Avenue. I’m not saying that I set my map app for the Chevy Chase library in Maryland, but I’m not not saying that either.

The Chevy Chase (DC) library is teeny tiny and does not have study rooms.

Review done.

Chevy Chase does have some tables for doing work, with outlets. There are also chairs looking out the window. And there is a table with a jigsaw puzzle. I did not use the restroom while there, so I can’t comment on it. There is a pretty substantial parking lot with signs stating that it’s for the community center; I don’t know whether parking for the library is allowed there. There are bus routes, of course, and street parking was easy.

Overall: Chevy Chase wouldn’t be my library of choice unless it was my neighborhood library, but it was a completely acceptable work location for a free hour before another meeting.

DCPL Study Rooms: A Study: The Introduction

Some home internet connectivity problems sent me to the library to do work. Some work sent me to the library to do work.

DC Public Library has 26 neighborhood libraries plus the central library, and almost all of them have study rooms. To date, I have used study rooms at seven branches, I have explored (but not yet used) the study rooms at an eighth, and I have a study room reserved at a ninth for later this week.

In keeping with my DC nerdity, I am going to try to use the study rooms at every branch. Stay tuned for the results of this study!

New series: Quilt Blocks in the Wild

Inspired by my current foray into quilting (my hobby is usually “buying materials for craft projects that I won’t actually finish,” but I’ve managed to make pretty decent progress on a few quilts over the past few months), I have been seeing quilt patterns while out and about.

Today: this beautiful wall at the new Bakery ThreeFifty location, re-opened just last week, much to the relief of the entire neighborhood–or so it seems.

And a corresponding quilt.

(Generative) AI, Pro Bono, Access to Justice, and Equity

Yesterday I went to a fantastic program organized by the Washington Council of Lawyers on “Best Practices in Pro Bono: The Role of Artificial Intelligence in Pro Bono and Access to Justice.” I am writing this post (hello, blog, my long-lost friend!) in part to capture my notes from the program for sharing with a few people who asked me to do so, in part to add my own reflections on what I learned, and in part to incorporate some additional resources. (And in part to get my writing muscles warmed up for a large writing project I need to work on this weekend!)

You will note that the title of this post includes “and equity,” although that wasn’t part of the name of the program. The program did include discussion of equity, but I think it’s important to state explicitly that equity is part of our values as we pursue access to justice. I will share here a relevant anecdote from the mandatory ethics day that preceded my admission to the bar in Maryland. The speaker was talking about the importance of pro bono work. Why is it important? he asked. His response: to improve the reputation of lawyers in the community. Are you side-eying or jaw-dropping? Me too. That attorney is why we can’t just assume that a lawyer doing work pro bono cares about equity.

An effective sales pitch?

One of the speakers at yesterday’s program is the Global Pro Bono Manager & Digital Strategist at Microsoft, and he did a fantastic job of showing how Microsoft’s Copilot can be used both for low-risk activities (think: putting together materials for a conference presentation) and higher-risk activities like legal research. His role on the panel was not intended as a sales pitch, but I came close to being convinced that I have been too negative about GenAI. Note that I only came close to being convinced! There are some strong negatives, and one of them, the environmental impact of AI, was never mentioned. I’ll get back to that in a moment.

“Low-risk” uses for GenAI

The executive director of CAIR Coalition was one of the panelists. He shared that CAIR has an AI policy that I wrote down as “zero use of GenAI for any work product that would go in front of an adjudicator but feel free to use for things that will make work easier like agendas and slideshows.” Any errors in paraphrasing are mine alone. In other words, low-risk activities are fine; high-risk activities are not. Also: if it is work that only a lawyer is allowed to do (ah, unauthorized practice of law, one of my topics-I-can-talk-about-without-preparation), don’t use GenAI, but if there’s no practice of law, go forth and use it.

Other low-risk activities: using reliable data to create graphs and charts, using a conference program to generate an invitation.

Several speakers mentioned using GenAI to write first drafts that they then edit to make their own work. I have concerns about this use case: (1) I question the intellectual integrity of the practice; (2) I worry about unintentionally importing bias from the systemic inequities built into the large language models behind the major AI products; and (3) each prompt has a significant environmental impact, and the effects of climate change fall disproportionately on lower-income communities.

Legal ethics

There are multiple concerns about legal ethics when we talk about GenAI. Two issues in particular come to mind right away: attorney-client privilege and competent representation. Related, but not technically a legal ethics issue, is a concern that self-represented litigants will unknowingly compromise their own privacy by including personal information in GenAI prompts.

Attorney-client privilege

Attorney-client privilege is one of the most widely known and understood privileges, so it should be obvious that entering client information into a GenAI prompt is at best highly risky from an ethical perspective. The law firm Crowell & Moring uses a proprietary AI platform and has strict rules around entering client information, even though their platform is theoretically contained. If even enterprise versions of GenAI platforms, with protections built into the usage agreements, are high risk with respect to attorney-client privilege, non-protected platforms are definitely over the line. Members of the public using freely available GenAI are not violating attorney-client privilege when they enter their own information into a prompt, but they may be adding private data to the LLM used by the AI platform.

Competent Representation

Microsoft’s GenAI platform Copilot was given this name in part as a reminder that GenAI is not itself the pilot. Use of GenAI requires human judgment to review the AI output. We’ve all heard about the lawyers sanctioned in New York for filing briefs citing nonexistent cases. This news served as an important reminder to attorneys to “trust but verify” instead of just “trust.” Failure to follow up with the verify step would be a failure to competently represent your client.

In DC, Ethics Opinion 388 lays out parameters for ethical use of GenAI. The overarching theme of the opinion is this discussion of the relationship between competence and technology use:

Competence includes understanding enough about any technology the lawyer uses in legal practice to be reasonably confident that the technology will advance the client’s interests in the representation. Separately, the lawyer should also be reasonably confident that use of and reliance on the technology will not be inconsistent with any of the lawyer’s other obligations under the Rules of Professional Conduct.

BigLaw vs Legal Services

Some large corporations–according to what I heard at yesterday’s program–are surveying their outside counsel to ask how those firms are using AI. As the speaker emphasized, HOW those firms are using AI, not IF they are using AI. There is an expectation that their outside counsel are using AI. (Note that I’ve referred here to AI writ large, not GenAI specifically.) As the speaker shared, “if AI is good enough for corporate clients, why is it not good enough for low income clients?”

Cost and ATJ

“GenAI could be a game changer for access to justice” (Jim Sandman), but one of the huge barriers to making it so is cost. Corporate clients can expect their outside counsel to use AI because those firms have the resources to study, learn about, and review the technology and specific products, to offer training to attorneys, and to pay for enterprise versions of the software. In other words, there are more costs to using GenAI than just software licenses…though software licenses can be substantial as well.

One of the ways that Microsoft uses its own GenAI platform for access to justice is as part of their work with Dreamers. I did not take notes specifically on their program, so forgive the vagueness here, but they used Copilot Studio to create a tool that helps Dreamers complete paperwork related to their status. It sounded super cool (see “An effective sales pitch?” above), but Copilot Studio costs $200/month with a limit on messages. (To be fair, the limit on messages seems reasonably high, at 25,000.)

Looked at one way, that’s only (“only”) $2400 per year. For a large legal services organization, that might be negligible. The Legal Aid Society of DC, for example, has an $11M budget and 114 employees (per their 2022 Form 990). For a smaller organization, however, like Christian Legal Aid of DC, with 5 employees and a budget of approximately $300K, $2400 looks like a substantial expense. Add to this the Microsoft Copilot licenses at $30 per user per month that I think are necessary to really get effective value out of Copilot Studio, which come on top of an organization’s Microsoft 365 licenses, and you see the expense grow. (This assumes that the organization is already using Microsoft 365; at JusticeAccess we use Google Workspace, which can integrate with Gemini and Gemini Code Assist, the latter being–I think–the Google analogue to Copilot Studio.)
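
For a back-of-the-envelope sense of why the same price tag lands so differently, here is a small Python sketch using the figures above: the $200/month Copilot Studio fee, the $30 per user per month Copilot licenses I think are needed, and the two organizations’ budgets and headcounts. Licensing every employee is purely an illustrative assumption, and the real costs of adoption (training, review, policy work) are left out.

```python
# Back-of-the-envelope comparison using the figures cited above.
# Assumption (for illustration only): every employee gets a Copilot license.
COPILOT_STUDIO_MONTHLY = 200   # $/month, with a 25,000-message cap
COPILOT_LICENSE_MONTHLY = 30   # $/user/month, on top of Microsoft 365 licenses

def annual_ai_cost(employees: int) -> int:
    """Annual cost of Copilot Studio plus per-user Copilot licenses."""
    return 12 * (COPILOT_STUDIO_MONTHLY + COPILOT_LICENSE_MONTHLY * employees)

orgs = {
    # name: (annual budget in dollars, employees), as cited in this post
    "Legal Aid Society of DC": (11_000_000, 114),
    "Christian Legal Aid of DC": (300_000, 5),
}

for name, (budget, employees) in orgs.items():
    cost = annual_ai_cost(employees)
    print(f"{name}: ${cost:,}/year, about {100 * cost / budget:.1f}% of budget")
```

The absolute dollar figure is smaller for the small organization, but as a share of budget the burden runs several times higher, which is the point.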

Retrieval Augmented Generation

Yesterday was a big day for discussion of Retrieval Augmented Generation, or RAG. The idea behind RAG, at least as it was pitched to me, is that the model draws only on vetted materials. In late-ish 2023, I attended an actual sales pitch (in contrast with the Microsoft speaker’s incidental sales pitch) from one of the major legal research vendors, in which they averred that because their AI product uses RAG, it would be free of hallucinations. Research released yesterday counters that assertion.

Researchers from Stanford’s Regulation, Evaluation, and Governance Lab tested the RAG-based GenAI products offered by Lexis and Westlaw against GPT-4 and found that the RAG products were better than GPT-4…but not great. You can read the press release/executive summary, or, if you have time, the preprint of the entire paper. Disclaimer: I have NOT read the paper myself. (Yet?)

Greg Lambert, a law librarian at a Texas firm and a blogger and podcaster on legal tech, apparently HAS read the paper, and he shared one observation about it on LinkedIn: specifically, that the study was based on the wrong AI product offered by Westlaw. Oops? His post generated important discussion–it’s definitely worth reading–and it also includes a follow-up indicating that the mix-up stemmed from an access issue disclosed in the paper, and that the access issue is being resolved so that the researchers can complete an apples-to-apples comparison.

Why does RAG still hallucinate? If I understood the Microsoft speaker correctly, it is because the language model used in RAG consists of more than just the vetted sources. (For criticism of my implication that all RAG works the same way, see this evaluation of RAG from the perspective of an academic librarian, in which we learn that there are actually “dozens or hundreds of different variants of RAG.”) Apparently, it is possible to “make your own Copilot”…and the way it works is that the vetted sources you assign to the model act as a layer on top of the LLM, rather than comprising the entire model. (Contrast this with Google’s Programmable Search Engine, which searches only the sites you tell it to include. Obviously this is not parallel to RAG AI, since it isn’t AI at all, but the point of contrast is the dataset itself rather than how the dataset is used.)
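
To make the “layer on top of the LLM” point concrete, here is a minimal sketch of one common RAG setup. Everything in it is hypothetical: a toy keyword-match retriever over a couple of made-up vetted documents, and a stub call_llm() standing in for whatever model API would actually be used. The structure is the thing to notice: the vetted sources only shape the prompt, while a general-purpose LLM still does the generating, which is why retrieval reduces hallucination without eliminating it.

```python
# Minimal, illustrative RAG sketch. The "vetted sources" are retrieved and
# prepended to the prompt; a general-purpose LLM still generates the answer.
# call_llm() is a hypothetical stand-in for a real model API.

VETTED_DOCS = [
    "D.C. Ethics Opinion 388 discusses competence and technology use.",
    "Attorney-client privilege can be compromised by sharing client data.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retrieval: rank the vetted documents by keyword overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real model API."""
    return f"[model answer, hopefully grounded in the provided context]\n{prompt[:80]}..."

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, VETTED_DOCS))
    # The vetted sources are a layer on top of the model, not the model itself:
    # the LLM can still draw on (and hallucinate from) its own training data.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer("What does Ethics Opinion 388 say about competence?"))
```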

So…?

I left yesterday’s program more confident about the possibilities of GenAI for access to justice, no less cynical about the possibility that GenAI will exacerbate the distance between the haves and the have-nots, and disappointed that there was no discussion of balancing competing values like the environmental impact of AI. There was informal discussion following the program about having a follow-up session, so I think we’ll see what thoughts come next!