3D Printing at Rutgers

Since I am going to be using 3D printing as part of my research, I’ve been on the lookout for places to print at Rutgers for quite some time. If you’re also interested in doing some 3D printing for your research, or you just want to print something for fun, here are a number of options that might be useful for you. I’m sure there are even more locations available, so if you happen to know of any other places that allow open use of printers, please let me know.

  1. Douglass Library, Fordham Commons area Fablab, Douglass Campus: on the ground level of the library are two MakerBot Replicator 2s and computers with design software. You can schedule an appointment to print your project and to get pricing estimates.
  2. Rutgers Makerspace, 35 Berrue Circle, Livingston Campus: MakerBot Replicators and other fun items, like a pool table, are available here. The Makerspace normally has regular drop-in hours for printing or just hanging out. The space is run by Rick Andersen, who has lots of experience in computers and electronics, including web design, Arduino, and soldering.
  3. Rutgers Mechanical Engineering Dept., Busch Campus: the department has a few options available for Rutgers affiliates to use, including a Stratasys Objet350 Connex and Stratasys uPrint SE. The contact person for setting up an appointment to get your projects printed and for pricing is John Petrowski (petrows@rci.rutgers.edu).
  4. FUBAR Labs, Highland Park, NJ: Fair Use Building and Research (FUBAR) Labs is a nonprofit that provides a local spot for people with common interests, usually in science and technology, to meet and collaborate. It’s an open community offering classes, workshops, study groups, and long-term project collaboration. You can join as a member for 24/7 use of the space, or you can drop by for one of their events to check them out.

From the computer screen to the lab bench: A physicist learns to do wet-lab biology

As Kenneth described in a recent post, the Center for Integrated Proteomics Research and the BioMaPS Institute for Quantitative Biology at Rutgers recently held a two-week “boot camp” program to cover a range of basic topics in molecular biology and biophysics.  The program was intended to serve the increasingly diverse community of scientists — with backgrounds ranging from physics and chemistry to computer science and mathematics — working in quantitative biology.

As a physics graduate student who works in the BioMaPS Institute, I was definitely in the target demographic.  In my undergraduate days I was mainly interested in particle physics and cosmology, so my coursework focused entirely on physics and mathematics.  I haven’t taken a biology or chemistry course since high school.  While I’ve certainly picked up a great deal of the necessary biology throughout my graduate research in biophysics, I could still use more breadth.

But I was primarily interested in gaining a very specific kind of breadth from the boot camp.  Besides having a background in physics, I am also a theorist by training, and I’d never had any experience doing “wet-lab” biology experiments.  (In physics, there is a very clear divide between theorists and experimentalists.)  But recently I’ve become interested in gaining some experience with wet-lab biology, both because it’s helpful for collaborating with experimentalists and understanding experimental papers, and because there is a serious possibility I will pursue a combination of theoretical and experimental work next year as a postdoc.  Luckily, the boot camp included a week-long experimental lab for complete beginners like me, so it seemed like the perfect opportunity to try it out.

The best thing about having zero experience with something is that you can learn a whole lot really quickly.  Even the most basic, mundane aspects of doing the lab were new and exciting for me, things I had heard about in talks or read in papers but never really understood.  So this is how you pipette…and “streak a plate”…and purify proteins…and run a gel…and so on.  Here’s some photographic evidence (credit to Gail Ferstandig Arnold):

As someone who has worked only on the other side of research until now, I have found it really eye-opening to have concrete experience doing experiments and generating data that previously existed only as abstractions in my theorist’s mind.  While I recognize that a week’s worth of exposure isn’t enough for me to jump right into doing all my own experiments as a postdoc — undoubtedly I’ll have to relearn much of last week’s material later — getting that first experience definitely gives me confidence for the future.

What do we study in Library and Information Science?

Library and Information Science (LIS) owes a considerable portion of its genesis to the concept of the document and to the process of organizing these unwieldy creatures.  The relationship between a document and the concept of information (the all-knowing “I” within “LIS”) is difficult to fully articulate.  Philosophers such as Mikel Dufrenne have tried to distinguish between aesthetic objects and signifying objects, with signifying objects, first and foremost, responsible for dispensing knowledge, even if they “engage us in an activity” (Buckland, 1997, p. 807).  It seems that this definition could be combined with the semiotic understanding of signs as artificial constructs, with the result being that our designation and understanding of a document depends both on a process of social construction (framing the object as a document and arranging it within a context of other objects) and on the whole range of its evidence-bearing properties (text, watermarks, images, etc.) relevant to the mode of inquiry.  What is at stake in this definitional argument is not just the scope of “acceptable” phenomena of study within this field, but by extension, the other human activities that are appropriate areas for LIS-brand inquisition.  For instance, if deer tracks could be considered a document, then it would be quite appropriate to study the information-seeking activities and cognitive processes that allow a hunter to track an animal.  I imagine few hunters would be impressed by the results of this study, but given a broad definition of document, our discipline could extrapolate from these results additional insights into human information behavior, generally speaking.  In this sense, our fundamental assumptions about the types of phenomena to be studied help to determine the possible avenues of research that we might consider as researchers within the field of Library and Information Science.

One of the important lessons to be learned from this discussion is that understanding a discipline’s initial assumptions is critical to understanding what one is, in fact, studying, and why scholars make the decisions that they do when selecting topics and methodologies.  Indeed, it becomes ever clearer that the LIS field cannot be completely unified, theoretically or methodologically, because a plethora of different types of researchers are coming to the field with very different metatheoretical assumptions.  Considering the field’s multidisciplinary pedigree (ranging from linguistics, to cognitive science, to computer science, to psychology, to the humanities, and on and on), it is not surprising that scholars working in LIS carry a diverse array of metatheoretical assumptions.  Thus, within the methods of this field, we should really not be that surprised to see an eclectic mix of quantitative, qualitative, and interpretive approaches at work.

Buckland, M. K. (1997). What is a “document”? Journal of the American Society for Information Science, 48(9), 804-809.

Research in Mathematics

Working in mathematics, I’ve found myself often asked the question “What do you do?” Sometimes the expected response is my “elevator pitch” (the short blurb about my area of expertise). But sometimes the question is more basic: “What is it you do, though? Do you just sit all day and think?”

Thinking [cc]

Now, to a large extent, many people in research spend all day thinking. However, mathematics is not simply the art of staring at a problem until the solution materializes in one’s head. (It’s worth a try, but often solutions do not come from epiphany alone.) I would like to discuss a few of the ways in which research is conducted in mathematics, with emphasis on the parallels that may exist between mathematics and other fields, perhaps to somewhat debunk the notion that no such similarities exist.

Nature [cc]

Mathematics research revolves around proving new theorems — mathematical statements that can be deduced from the fundamental axioms of mathematics and from preexisting theorems. Generally, though, the procedure is not to make a big pile of the existing statements and to try to string them together randomly until one forms a coherent deduction that results in something meaningful. That would be pretty rough sailing! Mathematics relies on conjectures, statements put forth because they are believed to be true, in the hope that someone will prove them at a later time. While there are conjectures (e.g. Goldbach’s conjecture) which remain unsolved for long periods of time (sometimes resulting in notoriety), most theorems start out as rough ideas or propositions that are developed with increasing structure and refinement until they are proven. In addition to proving new theorems, other steps forward in research include constructing examples of mathematical structures and verifying theorems by re-proving them in new ways. Computational work is also done to improve theorems in the case that a theorem is quantitative (or sometimes, to prove that a quantitative result is best possible).
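
To make the statement-and-proof vocabulary concrete, here is a toy example written in the standard format (my own illustration, assuming the usual amsthm-style theorem environments; it is not drawn from any particular paper):

```latex
\begin{theorem}
If $n$ is an odd integer, then $n^2$ is odd.
\end{theorem}

\begin{proof}
Since $n$ is odd, we may write $n = 2k + 1$ for some integer $k$. Then
\[
  n^2 = (2k + 1)^2 = 4k^2 + 4k + 1 = 2(2k^2 + 2k) + 1,
\]
which has the form $2m + 1$ with $m = 2k^2 + 2k$, so $n^2$ is odd.
\end{proof}
```

Real theorems are, of course, far less obvious, but the deductive shape is the same: known definitions and prior results go in, and a new statement comes out.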

El. J. Comb. logo

Know the literature: As in most fields, the mathematical literature is vast, and perhaps especially in mathematics, it is easily accessible. Increasingly, mathematics journals are available online — not just through the library system, but free for instant download on the Web. Having less concern for the preservation of intellectual property, many editorial boards have shifted to such open/free publication. (Indeed, I myself have a publication in The Electronic Journal of Combinatorics, which is precisely such a journal.) There is also the arXiv (the X is pronounced sort of like χ, the Greek chi), which hosts preprints of papers and other works of mathematics (and many other fields).

Being familiar with the body of literature, both seminal papers and other older works as well as the current cutting-edge work (as it appears first, usually on the arXiv), is an important part of conducting research in mathematics. Jacob Fox came to Rutgers in 2009, when he was at Princeton, to speak at a seminar. He noted during his talk the importance of being familiar with the literature, mentioning in particular how his knowledge of a certain publication helped him and his coauthors solve a problem.

Mathematics [cc]

Crafting and Proving Good Conjectures: One of the more important questions is where to start — if we’re going to prove a statement is true, what is that statement? Generating good conjectures is not a matter of guesswork or divine inspiration, at least not entirely (although the former may have helped from time to time, and the latter is open to some debate). Increasingly, experimentation is a common way to generate conjectures. It is also often useful to test conjectures in small, typical, or special cases (where “small,” “typical,” and “special” depend on the problem at hand). Usually a conjecture applies to too many cases to test them all (sometimes, infinitely many cases), so this methodology is often used to verify that the conjecture is sometimes true, but not to verify the conjecture exhaustively. (Conversely, experimentation may lead to a disproof of a conjecture by identifying, constructing, or otherwise elucidating a counterexample.) Experimentation may also help unearth components of the proof of the conjecture at hand.
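
As a concrete (and hedged) illustration of this kind of small-case checking, here is a short Python sketch of my own, not tied to any particular project, that tests Goldbach’s conjecture (every even integer greater than 2 is the sum of two primes) on even numbers up to a small bound and reports a counterexample if it ever finds one:

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True


def goldbach_witness(n):
    """Return primes (p, q) with p + q == n, or None if no such pair exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None


# Check every even number in a small range.  Finding no counterexample
# here does NOT prove the conjecture; finding one would disprove it.
for n in range(4, 10_000, 2):
    if goldbach_witness(n) is None:
        print(f"Counterexample found: {n}")
        break
else:
    print("No counterexample below 10,000 (as expected).")
```

An experiment like this cannot settle the conjecture, but it can build confidence, surface patterns worth trying to prove, or (if a counterexample ever turned up) kill the conjecture outright.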

It is also crucial to have a firm understanding of the big picture in the field where these questions are being asked. There is a substantial amount of context and content that guides someone to the right kinds of conjectures and the proofs of those conjectures. Mathematics is a field in which the objects of study are highly structured, and knowing these structures helps eliminate some of the technical clutter that can obfuscate the underlying truths that one wishes to prove and the bits and pieces that go into proving them. Many proof techniques can be adapted to different situations, so in some sense theorems may be proved by matching a generalized proof to a statement you would like to prove specifically.

Proof [cc]

Building Theories and Solving Problems: Tim Gowers is famously credited with roughly dividing mathematicians into the two categories of problem-solvers and theory-builders (or rather, he is credited with noting this division in his oft-quoted The Two Cultures of Mathematics). I won’t discuss this dichotomy, but these two activities characterize much of the research done in mathematics. Proving single, unrelated theorems one by one is not usually how research goes. Rather, the enterprise involves longer strands of investigation — a dozen theorems sometimes collapse into a single stronger and better statement after enough exploration and refinement. Meanwhile, single ideas branch into many avenues of investigation. But generally, the aim is not to knock down one theorem, then turn around π radians and start over, but to work on larger-scale investigations. I could make a metaphor about bowling pins or dominoes, but I think the idea is clear. An important aspect here is also collaboration, which is a major element of research for many mathematicians. Working on papers is one part of collaboration, but other important activities include seminars & conferences (as participants and as organizers), expository writing, editorial work, and more.

Structure [cc]

So the venture is to find good lines of research and establish some clear path along that line. There are two approaches. The first is to identify important problems and build up theory to solve them. One famous example is Fermat’s Last Theorem, which was conjectured centuries ago and only recently proved by Andrew Wiles. During those centuries, large swaths of mathematics were developed in large part as attempts to prove this conjecture (including Schur’s theorem, one of my personal favorites). This “problem solver” work weaves what might be the leading strands of the theory, loose and rough but pushing outwards farther than neighboring strands. Such work often moves mathematics in innovative or interdisciplinary directions, building bridges between fields of mathematics, and may also connect with work in applied mathematics. The “theory builders” weave strength and cohesion into the fabric (to extend the metaphor). To this end, they focus their research on developing and enriching the theory. They may work to classify all types of a particular structure, for example. Such work includes that of several Rutgers faculty in classifying the finite simple groups. This theory-building reinforces others’ work as they develop and solve conjectures, as it makes the underlying theory more robust.

Images used in this entry are used under fair-use and/or under licensing guidelines set forth by the copyright holder that allow use in this blog, as presented for educational or critical commentary. Images are copyright their respective holders and credit or source is indicated in each caption or in the text of this entry, as applicable. Thanks to Yusra Naqvi for her helpful comments and suggestions.

Finding the needle in a haystack

Research methodology in the sciences can either make you jump up and down like you’ve won the lottery, or sit and cry (you already have enough of a headache, so banging your head on a table isn’t an option). There are the methods that are fail-safe and easy—the ones you don’t mind doing because you know they can’t go wrong. And then there are the methods that can go anything but right: the ones that make you cringe when you see the results, or leave you flabbergasted because you just don’t know what went wrong.

Sometimes it’s user error: maybe you added too much of this, didn’t add that, grabbed the wrong thing, pushed the incorrect button, broke something (it happens); the list goes on. Other times, it’s instrumentation: maybe it hasn’t been calibrated in a while or a sensor is malfunctioning. And at other times, it could simply be your sample. Either way, when things don’t go as you expect, it becomes a game of cat-and-mouse, finding a needle in a haystack, whatever you want to call it. You have to hunt for what could be the root of the problem, this pure speck of evil that is getting in the way of you and your research.

However, let’s not overlook the beauty that research methodology has provided us. Yes, most of us complain about how tedious some of the work is, how long our incubation times are, how many problems we have with instrumentation. But let’s go back in time to before these techniques were discovered, before these instruments were created. We would not be able to complete a fraction of all that science (as well as other fields) has accomplished. We would not be able to run DNA samples on a gel using electrophoresis in one hour, or extract RNA in half a day. We would not be able to perform all our animal studies, or measure blood samples. So while you’re banging your head (figuratively) on your desk, buried in frustration, think about all the good things research methodology gives us—without it, you’d only be able to complete a fraction of all that you will accomplish during your time here at Rutgers.

Randomly Walking through Research

From reading papers, it’s easy to gain the following picture of what the research process looks like: someone starts at point A, a known point in the space of knowledge, then proceeds directly through various arguments and data to a conclusion at the previously unknown point B.  However, thinking that research actually works this way based on what you see in a paper is like thinking that Michael Jordan just awoke one day and suddenly started dunking from the free throw line.

No, MJ almost certainly traveled a long road to get that much air.  The same goes for research.  The real research process more resembles the famous physics concept of a “random walk” (or more colorfully, the “drunkard’s walk”).  In a random walk, some process is imagined as an object, perhaps an inebriated human, taking a step in a random direction at regular time intervals [1].  This idea is used to model everything from chemical reactions to stock markets.
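
To make the idea concrete, here is a minimal Python sketch (my own illustration, not taken from either reference) of a one-dimensional random walk, in which a walker steps one unit left or right with equal probability at each time step:

```python
import random

def random_walk(num_steps, seed=None):
    """Simulate a 1D random walk starting at 0; return the list of positions."""
    rng = random.Random(seed)
    position = 0
    path = [position]
    for _ in range(num_steps):
        position += rng.choice((-1, 1))  # step left or right, equally likely
        path.append(position)
    return path

# A few independent 1000-step walks: the endpoints scatter around 0,
# with a typical spread on the order of sqrt(1000), about 32 steps.
for i in range(5):
    walk = random_walk(1000, seed=i)
    print(f"walk {i}: final position = {walk[-1]}")
```

Despite the utterly simple rule, the trajectories wander widely and unpredictably, which is exactly the feature the research analogy below leans on.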

The random walk provides an interesting visualization of the research process as well.  Uri Alon, a scientist at the Weizmann Institute in Israel (whose outstanding set of resources for “Nurturing Scientists” will be a topic for future posts), has described the process as follows [2].  You indeed start out at A, headed for B.  (See figure below.)  But instead of a nice straight route, you embark on an irregular trajectory with many detours, barriers, and delays.  Often you are eventually forced to abandon B altogether: B was already discovered by some Russian guys in the 1970s, or maybe it’s impossible, or perhaps it just can’t be reached if you hope to graduate within the current decade.

At this point your random walk enters a limbo state that Uri Alon calls “the cloud”: you know you can’t go to B anymore, but you don’t know where else to go.  Being stuck in the cloud is probably the most difficult part of doing research.  But the key is recognizing that this is a natural and inevitable part of the process.  If you persevere, you will leave the cloud by eventually finding a new place to go: point C.  In fact, often C is much more interesting than B would have been anyway — the unexpected almost always is.  Of course, sometimes C fails to work out too, in which case you redirect to D, E, F, etc.  (Hopefully you don’t run out of letters!)  The point is that research is less like a direct path from A to B and more like a random walk with an unknown trajectory and an unknown destination.  But after all, it is this journey into the unknown that makes research so exciting and so important.

[1]  Mlodinow L.  (2008)  The Drunkard’s Walk: How Randomness Rules Our Lives.  Pantheon, New York.
[2]  Alon U.  (2009)  “How To Choose a Good Scientific Problem.”  Molecular Cell 35: 726-728.

Research Methodologies in Laboratory Sciences: The Joys of Analytical Instrumentation

Obtaining a graduate degree would be so much easier if the analytical instrumentation would just work…For those of you who don’t have to run various chromatography instruments (ICs, HPLCs, GCs), thermocyclers, spectrophotometers, or any of the other numerous finicky pieces of laboratory equipment, I envy you.  You haven’t had to start your day thinking you would be able to run 100+ samples and get another figure for your thesis, only to spend not just a day but a whole week troubleshooting a mysterious problem, eventually determining that you’ll have to order a part that won’t arrive for another three weeks just to measure the concentration of your chemical of interest.  This, of course, holds up all the other experiments you had planned to set up.  I welcome you all to the joys of basic wet science research.

When I find myself in these situations I take a deep breath and think of all the reading I’ll be able to get done while I wait.  In my experience these situations usually arise from a few common problems and are a major part of the experimental process.  First, make sure you really read the instrument manual before you attempt to use anything or try to fix it.  Many times an instrument isn’t working because someone else, who had no idea what they were doing, decided to make a “repair.”  This is one reason it is important for senior members of the lab to instruct new lab members on proper usage.  Second, remember to perform routine maintenance, as neglected instruments are like high-maintenance boyfriends and girlfriends: ignore them for too long and they will refuse to work, seemingly out of spite.  Instruments work best when used and maintained on a regular basis.  Third, always remember that this is part of the “learning” process.  You never really understand how something works until you have taken it apart and put it back together a million times.  Now not only are you an expert on the instrument, but you can also understand and interpret your data better, since you know the limitations of the measurement.  Your advisor and other graduate students will agree that this is a large part of the experimental process.

Lastly, if all else fails, blame an undergraduate and take a long weekend or a mental health day.  Delays are only to be expected when relying on shared equipment, and if you are lucky someone else will have fixed it by the time you get back.  Plus, working this hard makes obtaining the data that much sweeter.  So the next time an instrument, computer, or your “favorite” piece of equipment gives you a strange error message, remember that you are not alone and that this is all part of the process.