Embroidery the Hard Way

With tens of thousands of dollars of equipment on hand, one would think that the seemingly simple would be possible.  Oh dear.  Not even close.

For years I’ve been putting binding on blankets and embroidering them with, generally, just a couple words- a name or something.  I wanted to do a bigger project- a quote that ran the whole length of the blanket.  I can’t even count the number of ways this went wrong.

I have access to a Babylock Ellisimo, a Babylock Ellyse, and a Bernina Artista 170.  The Ellisimo, being the biggest and baddest of these, seemed like the logical place to start. So….

  • Babylock doesn’t label their frames.  At all.
    • No size indication.
    • No useful indication of zero.
    • No model name or number you can search.
    • Not even an easy way to ask the sewing machine what size frames it knows about.
    • To add insult to injury, you can measure the frame–but the result is NOT the frame size.
    • Finally, I found out that the sizing is on the clear plastic insert pieces, which are also useful for rough alignment of your workpiece.  However, if you have the machine trace out the pattern perimeter, the traced outline doesn’t align with these inserts.  None of the pieces are labeled with part numbers or sizes, but at least you can count the square centimeters on the clear inserts.
  • Poor discoverability all around on the Babylock.
    • Insert the frame, remove the frame, try not to be upset when the moving thing breaks a needle….
    • As noted above, no way to know what size frame is loaded or what frames the machine knows about.
    • If the embroidery file on your flash drive is too large for any frame the machine knows about, it simply DOESN’T DISPLAY ANY INDICATION that the file exists!
      • Please, if there’s something wrong with the file, show some sign of life!
      • If the file is corrupt or a format you don’t understand, say so.
      • If it’s too big, say so.
      • Too many stitches?  You get the idea.  (A sketch of the kind of up-front check I mean follows this list.)
  • No way to index multiple designs together across frame positions, or at least not as far as I could tell.
    • Apparently some machines have some kind of machine vision.  I don’t need anywhere near that level of precision- just let me index off a known point.
  • Stitchcode
  • SewWhat-Pro
    • Some cool features.  Not worth $65.
  • The funny thing is that some of the hard stuff was remarkably easy.
    • The “Letterworks Pro III” software from 20+ years ago worked flawlessly to take an arbitrary TrueType font and turn it into a stitch path.
    • The pattern it generated was pretty clearly a raster, but it does work.
    • Some of the embroidery fonts have rather annoying bugs- like the “LaurenScript” font, which will have big gaps if you blow it up too much.  These don’t show up in Letterworks but are apparent on the Ellisimo display and in SewWhat.
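For what it’s worth, the kind of sanity check I’m asking for is easy to sketch.  Here’s a minimal example in Python: the “#PES” magic bytes are the real signature of Brother/Babylock PES files, but the hoop table and the dimension arguments are stand-ins I made up for illustration.

```python
# Sketch: tell the user WHY a design can't be used instead of hiding it.
# Hoop sizes below are hypothetical stand-ins, not Babylock's actual list.
KNOWN_HOOPS_MM = [(100, 100), (130, 180), (180, 300)]  # (width, height)

def check_design(path, width_mm, height_mm):
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic != b"#PES":  # PES files begin with the '#PES' signature
        return f"{path}: unrecognized or corrupt file"
    if not any(width_mm <= w and height_mm <= h for w, h in KNOWN_HOOPS_MM):
        return f"{path}: {width_mm}x{height_mm} mm fits no known hoop"
    return f"{path}: OK"
```

Even that much- an error message instead of silence- would have saved me hours.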

I would much rather publish a HOWTO- but I’m in the quagmire for the time being.  There are days I wonder if a needle and thimble would be the easier thing to do….but then, this project is on the order of 50,000 stitches…..per side…..

PS It seems to me that machining, printing (2D and 3D), pick and place, and PCB manufacture all involve fundamentally the same operations, but they run on totally different stacks.  I wish we could’ve all agreed on HPGL, G-code, PostScript, or SOMETHING back in the day.

Metrics for sunsetting systems

Really, these are similar metrics to what you would use to choose systems- except they’re often on the tail end of a choice made years ago.

I frequently see multiple systems run in parallel that really should be integrated (e.g., why would one have separate time and access cards?).  If you’re going to choose between systems, score them both and compare their relative risks and benefits.

This is a good time to read my previous post about sunk cost bias and risk adjustment.

I propose the following categories and considerations for evaluating systems for sunset (a minimal scoring sketch follows the list):

  • Longevity
    • How long do you expect to need the system?
    • What would you do if you had to replace the system?
      • What’s the horizon on the subsystems? (e.g., if it only runs on SPARC, it’s probably time to look at sunsetting it.)
      • What if the vendor goes out of business?
      • What if there’s some horrific reliability or security problem and you had to take it down?
    • Is there any reason you would have to upgrade or modify the system in the foreseeable future to meet some need?  If so, what’s the cost and associated risk?
  • Data Synergy
    • Does the system tie to your strategic data goals?
    • How easy is it to import, export, back up, and manipulate data stored in the system?
    • Is the system tied to a core competence? (e.g., if your main customer CRM is very siloed, that probably matters a lot more than if your timecard system is).
  • Finance and Risk
    • Operating and maintenance costs.  Don’t forget that periodic downtime has some associated cost, and it’s really not acceptable for any system to be down for, say, 6 hours every night.
    • Patch intervals, speed of patch issuance, difficulty, likelihood of breakage.
    • How does the system score for PCI and other security frameworks?
    • Does the system integrate nicely with both your core systems and your customers’?
    • Does the system support accretive data?
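To make “score them both and compare” concrete, here’s a minimal scorecard sketch.  The category weights and the 1-5 scale are arbitrary assumptions; the point is that both systems get rated against the same rubric.

```python
# Minimal scorecard: rate each system 1-5 per category, weight, compare.
# Weights are assumptions; tune them to your organization's priorities.
WEIGHTS = {"longevity": 0.3, "data_synergy": 0.3, "finance_risk": 0.4}

def score(ratings):
    """ratings: dict mapping category name -> rating from 1 to 5."""
    return sum(WEIGHTS[cat] * ratings[cat] for cat in WEIGHTS)

legacy      = {"longevity": 2, "data_synergy": 2, "finance_risk": 3}
replacement = {"longevity": 4, "data_synergy": 5, "finance_risk": 3}

print(f"legacy: {score(legacy):.1f} vs replacement: {score(replacement):.1f}")
```

If the gap is wide and the migration cost is bearable, that’s your sunset candidate.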

Assessing risk (and pitfalls)

I’ve already mused a bit on project estimation, and I have an upcoming post about how to think about sunsetting systems.  It occurs to me that there are at least two more fundamental questions to address:  what does it mean to talk about the risk in a system, and what biases might we have when making that estimate?

What is risk?

Finance types define risk as the “standard deviation”.  In other words, it’s a measure of things that might not go strictly according to plan, and how far away from your estimate those things might send you.  If you say the project’s estimated cost is $100 with no risk, you’re saying the cost should be exactly $100.  If instead you have a cost of $100 with a risk of $50 (we’ll naively assume normally distributed), your actual cost could land well away from that $100 estimate.
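To put numbers on that, here’s a quick sketch using Python’s standard library (the normal distribution is the same naive assumption as above):

```python
from statistics import NormalDist

# Estimate: $100 expected cost, with a "risk" (standard deviation) of $50.
cost = NormalDist(mu=100, sigma=50)

print(f"P(cost <= $150) = {cost.cdf(150):.0%}")      # ~84%
print(f"P(cost >  $200) = {1 - cost.cdf(200):.0%}")  # ~2%
```

So roughly one project in six blows past $150- that’s the kind of bottom line a standard deviation lets you state.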

I often see risk discussed by technical people in terms of roadblocks: “it might be really slow when we hook widget X up to database Y.”  “We might not be able to translate records directly from System Q to System W.”  These are fine things to know about- but if you can quantify them in terms of time lost, revenue lost, or raw dollars, you’ll be much closer to putting a bottom line on your estimates.

When doing estimates, teams often use a sort of placeholder.  Instead of estimating that a task will cost, say, $1000, we tend to say it’s a “Small” task, and other tasks might be “Medium”, “Large”, or “Extra-Large”.  This is fine.  However, I also think it would be useful to score each task for risk.  For example, I think most developers would agree that “fetch a web page” is a relatively low-risk task, while doing bleeding-edge machine vision is riskier.  From here, perhaps you extrapolate that the “retrieve a webpage” task is estimated to cost $800-1200, while a riskier task of similar size might be $500-2000.
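A sketch of what that two-axis scoring might look like (the dollar figures are the hypothetical ones above):

```python
# Hypothetical mapping: (size, risk) -> (low, high) dollar estimate.
# A riskier task of the same nominal size gets a wider range.
ESTIMATES = {
    ("S", "low"):  (800, 1200),   # e.g., "fetch a web page"
    ("S", "high"): (500, 2000),   # e.g., bleeding-edge machine vision
}

def budget(tasks):
    """tasks: list of (size, risk) pairs.  Returns (low, high) totals."""
    lows, highs = zip(*(ESTIMATES[t] for t in tasks))
    return sum(lows), sum(highs)

print(budget([("S", "low"), ("S", "high")]))  # (1300, 3200)
```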

Bias

We all have biases- there are some tropes in industry, such as “not invented here” syndrome.  Most technical folks, I think, have some bias either toward platforms they’ve already used and understand, or toward the latest “sexy” technology.  I suffer from a variant of this where I prefer open-source products.

There’s another bias inherent to humans: the sunk cost fallacy.  We tend to impart value to things we’ve paid for, even if that past transaction has nothing to do with the current decision.  Sunk cost bias is fairly intuitive if you think about it, but it’s difficult to consciously counteract when making decisions.  The essence: it doesn’t matter that you spent a million dollars last year.  The only question is the cost of a new system versus the ongoing cost of the current one.


Algorithms that are 80% good aren’t scary.

Leaving aside the difference between sensitivity and specificity: if people know the machine is sometimes wrong, it’s not so bad.  Things get scary when the machine is 99% or 99.9% accurate, and you’re caught on the wrong end of a presumption.
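The arithmetic behind that worry is straightforward.  A back-of-the-envelope sketch (all numbers hypothetical):

```python
# Hypothetical: a 99.9%-accurate matching system applied at scale.
population = 1_000_000
error_rate = 0.001  # i.e., "99.9% accurate"

wrongly_flagged = int(population * error_rate)
print(f"{wrongly_flagged:,} people wrongly flagged")  # 1,000 people
# Each one faces a system presumed right 999 times out of 1,000.
```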

I’ve often wondered, for example, how many people get picked up on warrants meant for someone else.  I think it’s important that we have systems for human review of the totality of the circumstances.

Vexing little bugs

I find that, particularly with geek stuff, I get hung up on tiny little details.  For example, I did a deep dive with JavaScript and CSS the other week, trying to find out why I couldn’t get an input field to select all its text when I clicked on it.  This ties back to Spolsky’s thoughts on craftsmanship.

But if I channel Godin for just a bit, I’d remember that the whole product is important- not just one tiny detail.  It’s fine to care about little details once the building is built, but while you’re building- keep going.

2018 Law license reciprocity update

It’s been a while since I blogged about the UBE and Kansas.  I was surprised that Kansas took an extra couple of years to join, and even more surprised at the adoption in the Northeast.  I had always assumed that jurisdictions such as California, Florida, Texas, and New York would maintain their own licensing regimes- but New York joined the UBE in July 2016.

As always, consult local rules.  NCBEX publishes a very useful guide about bar admissions, but I’m going to supplement it here with some reciprocity rules as I’ve found them on this date.  No guarantee this information is accurate or will be kept updated, folks!

I’ve got more than five years’ practice in Kansas now, so I’m looking at states in the general area that I might consider joining.  Here we go:

State    | Applicable Rule(s) | Active Practice Requirement
Missouri | 8.10               | 5 of last 10 years
Iowa     | 31.12, 31.13*      | 5 of last 7 years
Nebraska | § 3-119(B)         | 3 of last 5 years
Illinois | 705                | 3 of last 5 years

*That link points to the rules as of a specific date!  Be careful with that one.

On project estimation

This may or may not be a series, but I wanted to dash off a few thoughts.  I have a feeling this post will come across as very stream-of-consciousness but will be clarified by followup posts.

The essence of estimating projects is to evaluate two things:

  • How long is it expected to take?
  • What risks are there?

There are some brilliant writings on the first issue, but I haven’t seen much done on the second.  I’d like to propose the following:

  • Each task, in addition to whatever size rank you want to give it (e.g., Small, Medium, Large, eXtra Large), gets ranked for risk.
    • For example, let’s suppose there is a “grab webpage” task, and every member of the team agrees it is Small.  That implies low variance.
    • Let’s suppose you want to parse the webpage you’ve just grabbed, and you get the following votes as to its size: S, S, L, M, XL.  That implies a large variance, and higher risk (a sketch of this follows the list).  Note, however, that this level of disparity might also imply that the task isn’t well scoped, that different team members are using different assumptions, or some other definitional or implementation issue.
  • These variances should be accounted for in the budget and as part of burndown.
  • One feature that I haven’t seen in any tool is the ability to do “actual” vs “budgeted” burndown: the comparison between actual time on project versus the budget.  The useful thing about taking risk into account upfront is that you may very well find that while your actual burn exceeds a straight-line estimate, the numbers actually do fall within your predicted risk range (Nate Silver can tell you all about this one).  Of course, you can’t know until you have the tools to do this.  I’m currently exploring abstractions to allow various permutations of Kanban and other project monitoring and tie them against commits, support tickets, and testing.
  • Now, one additional layer to consider is prospective maintenance cost.  Here, parsing a webpage is a “brittle” task- obviously, if someone upstream changes the format of the web page, you may very well have to start this task over from scratch, or it may cause future outages.  This whole “ongoing cost” and/or “brittleness” risk factor is something I’ve rarely seen accounted for in projects.
  • Finally, most groups don’t really estimate to account for interdependencies, especially internal ones, or testing.
    • Interdependencies are a major problem: how many times did the Boeing 787 slip?  Obviously, you can’t ship just the wings of an airplane.  But in software, you can often ship some intermediate product even if waiting on some “important” piece.  In fact, sometimes software is better without that “essential” feature.
    • I’m not necessarily a fan of writing tests before writing code, but I do think that organizations tend to focus on “features first” at the expense of some very brittle systems.  For the love of Pete, at least build a regression suite as you close dev and support tickets.
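Here’s a minimal sketch of the vote-dispersion idea from the list above.  The point mapping (S=1, M=2, L=3, XL=5) is an arbitrary assumption:

```python
from statistics import mean, stdev

# Arbitrary assumption: map T-shirt sizes onto rough point values.
POINTS = {"S": 1, "M": 2, "L": 3, "XL": 5}

def risk_from_votes(votes):
    """Use estimate dispersion across the team as a crude risk signal."""
    pts = [POINTS[v] for v in votes]
    return mean(pts), stdev(pts)

print(risk_from_votes(["S", "S", "S", "S", "S"]))   # (1, 0.0): low risk
print(risk_from_votes(["S", "S", "L", "M", "XL"]))  # (2.4, ~1.67): flag it
```

High dispersion doesn’t automatically mean “risky”- as noted above, it may just mean the task needs rescoping before anyone commits to a number.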

Controversy and Conversation

For (I think) the first time on this blog, I’ve made a post password-protected.  I don’t really want to make that a regular habit, but the subject matter is sufficiently nuanced and potentially controversial that I wanted to let a few trusted folks review it before I made my position globally visible.

Fundamentally, this issue relates to what Paul Graham raised in his essay “What You Can’t Say”.  The idea of being a neutral Switzerland of ideas is contrasted with Lin-Manuel Miranda’s Hamilton character:

I’d rather be divisive than indecisive.

In fact, I think one could fairly make the point that the whole play is the tension between Aaron Burr (“Talk less; Smile more”) and Hamilton’s exuberant position taking.

I think the real world should be a balance between these things.  I gave a speech about guns on campus last week, and I fully expected that the former faculty and other audience members would be very frustrated with the state of Kansas law.  I was pleasantly surprised at how open and free the dialogue proved, and it was a remarkably civil and informed conversation.  The experience gave me hope that we can still maintain civil society through conversation, facts, and dialogue.  I was flattered when one audience member commented that my talk was “encyclopedic”.

I suppose I’m lucky to be unafraid of public speaking.  When given the chance, I think I’m going to continue holding talks and forums on issues of public interest, even when they are controversial.  In any case, I hope I never lose the humility to be open to changing my own mind, even when I hold what I think is a nuanced and researched position.  But I also think that part of being a good citizen is the ability to hold and debate opinions on controversial subjects, even if I’m not willing to advertise these on my blog.

Protected: On crime statistics

This content is password protected.

This page left blank

You’ve probably seen something like this in a book at one point or another.  This seemingly contradictory statement exists for a good reason: it proves that the printing process worked, and that the customer isn’t getting a botched print job.

Books are composed of signatures: one large sheet folded to form several pages.  Three edges are then lopped off, producing a book bound only on one edge.

Sheets are easy to fold in powers of two: 4-, 8-, 16-, and even 32-page signatures are reasonably common.  Presumably, most writers don’t sit down with the goal of writing a book that will fill its signatures exactly.  This is one reason why many books have ads, pages for notes, or simply blank pages at the back (or the front, for that matter).  It’s cheaper for the printer to leave the extra paper in than to rip the surplus pages out, and it gives a nicer finish.
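The arithmetic is simple enough to sketch:

```python
import math

def blank_pages(page_count, signature_size=16):
    """Pages come in whole signatures, so the printer rounds up."""
    printed = math.ceil(page_count / signature_size) * signature_size
    return printed - page_count

print(blank_pages(345))  # a 345-page book in 16-page signatures leaves 7 extra
```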