
Friday, November 11, 2016

Everyone Has Good Ideas

Everyone has good ideas. At the very least, someone cared enough to share one. And while some ideas turn out not to be that good in the end, you do not know until you hear them. When building a software product, ideas come from everywhere.

The business side will have some ideas and you know they will be good ones, because they are paying for everything! Amazing how ideas sound a little different coming from the person (with)holding the checkbook. Developers will have good ideas because they are creative and just finished reading medium.com, and are eager to try a new framework/library/database because surely it will be amazing and solve all the things. Other people in the company, or even individual clients, will have ideas now and again, sometimes in a very niche part of the software. And those ideas are important to them, more important than your next major or minor release!

What to do?! Blowing people off and just blazing ahead with your plan might turn out well, and you'll prove to everyone how right you were! But at best some people are not happy, with you or your processes. At worst, you stop receiving new ideas at all. (Let's not even consider ignoring others' ideas and still sinking . . . uh oh for you!)

When it comes to receiving ideas, two things need to happen:

People need to feel heard
People need feedback

Feeling Heard

Like I mentioned before, any idea someone brings up is important to them. If they do not feel heard or valued when giving their idea, they might become alienated from the team or just bitter. It could happen differently for different people, but the basic principle is to treat people well! We want good ideas, and we want people to give them.

Feeling heard is just a quick discussion, and it is also helpful in clarifying the idea.

  • "What were you thinking?" 
  • "Why is this important?" 
  • "Any timeframe for the idea?" 
  • Maybe there's a quick drawing on a whiteboard. 

If you can repeat back the idea after you understand it, that person will feel valued.

All this will go into your backlog of choice, whether it's a spreadsheet, sticky notes in your dev's cube, or some tool your team uses. Ready to evaluate and enter planning (or cold storage : )

If you hear people's ideas but they have no way to get any feedback, their ideas just go into a black hole and they have no clue if the idea is something to prepare for. We contracted on a team once where the Tech Lead wanted the business team to leave the dev team alone, constructed a wall, and ideas were tossed over (through him). Without any feedback from their meetings, the sales team was pitching one set of ideas to clients, the dev team was building another set of ideas, and everyone only found out how far apart they were at Milestone Releases! Contrast that with a company I've spoken with that has great intra-team communication, and ideas are flowing!


Giving Feedback

Often called closing-the-loop, any idea needs a resolution.

  • Is it happening this sprint? 
  • Is it up for future review? 
  • "Yes, though wait until Milestone 2 is over." 
  • Skip it, because...


It does not matter whether this feedback is personally communicated or can be looked up at another time. It does matter why the resolution turned out the way it did.

People need to know what was good about the idea, and what was not. Maybe the idea was good, just not right now. Maybe the impact would not be worth the investment. Whatever the reason, it communicates the goals and schedules of the team, and will help shape better ideas in the future.

If people's ideas get feedback, but they are not heard, the people do not feel valued. Ever hear things like this?

  • "Why bother giving ideas, they just get rejected?" 
  • "Those tech guys don't know anything, this idea is necessary and will save the company!" 
  • "Yeah, I saw the demo. That was my idea, ya know?"



People that feel valued will contribute more, will stay with the company, will think of real ways to improve because they genuinely care. Good people are the greatest asset, and must be valued (of course, a principle in life and not just business.)

Now Do It


The process your team implements can be adjusted, but both parts are necessary.

The existing meetings (Planning, Retros, "weekly team", etc.) you already have are a good place to touch on these things. Keep using whatever tool you're using to manage tickets. With an attentive team lead (scrum master, whatever), even "being heard" can be accomplished remotely through the ticketing system; though starting that way may not have the right effect until people can trust the process. You can even give rewards for good ideas. Whatever it takes for your specific team.

Everyone has good ideas, so work hard to find and keep the good ones, and ensure the next ones are brought in.






Thursday, September 15, 2016

The Overly Technical Interview

Recently I started the interview process with a tech company that worked on some pretty cool software. That was their business, they recently went public, and were growing quickly. I applied as a Technical Lead (forget the actual title) and my background was a fit to get past the screening interview.

The Test


First step past the screening interview was a timed test at HackerRank. 75 minutes to answer 15 multiple choice and 3 programming problems. The multiple choice were a range of topics:

  • What CSS style would be applied to this element?
  • What is the correct SQL query for generating this table output?
  • What would this code output on the console?
  • Where is the programming mistake in this (pseudo) code?
  • Which algorithm is more efficient?


A great range of questions and topics. The only bad thing about multiple choice is second guessing yourself, like knowing the answer in 5 seconds and then talking yourself out of it for two minutes : )

The programming problems were pretty cool, and the testing tool allowed you to write and run in a large variety of languages. It even ran unit tests for you! On my first problem I got to a testable state and passed 2 of 5 unit tests, but didn't get to see the inputs or expected outputs of the tests. So I rewrote it a different way, and this time passed 3 of 5 unit tests. By that point time was running out, so I moved on.

In the end I ran out of time before completing every question. I did not see immediate feedback on my test (from the tool or company's recruiter) and was not sure how this would be scored. Was I intended to finish? Was this some kind of stress test? Do they only take those who can finish?

The One-On-One


I guess my score was enough to move on and I had a one-on-one with another engineer. We talked about past projects of mine, and then did a coding exercise together, live on codeshare.io. The problem was to write Fibonacci in any language I wanted. I just started coding, though it did not feel natural because I haven't used the Fibonacci sequence since I learned about recursion in CS 101. I took a break from coding to explain where my head was at and described a looping algorithm that would solve the problem by keeping track of an index and two variables. My interviewer showed a recursive version and asked about its pros and cons (algorithmic complexity, stack size, etc.) compared to the version I described. I asked about his views on the company and we were out of time.
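For reference, the looping approach I described sketches out like this: keep two running values, advance them with an index, and never recurse.

```javascript
// Iterative Fibonacci: O(n) time, O(1) space, constant stack depth.
function fib(n) {
  var prev = 0, curr = 1; // fib(0) and fib(1)
  for (var i = 0; i < n; i++) {
    var next = prev + curr;
    prev = curr;
    curr = next;
  }
  return prev;
}
// fib(10) === 55
```

Compared to the naive recursive version, there is no exponential blowup of repeated calls and no stack-depth concern.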

The End


The next day I got the "thanks but no thanks" response from my recruiter, which of course bubbles up the feelings of not getting picked. Everyone was nice and professional, and we went our separate ways, wishing each other the best.

Takeaways


This interview did not lead to an offer, which is the most telling thing that can come from an interview process. In my view, something did not go right because I thought an offer was possible. It is why I applied!

Two takeaways from this experience. One for me, one for them.

Takeaway One: Study

For those tech companies that are asking the algorithm questions, and I've heard about it from a lot of the big names, one needs to study. No matter if you think the questions are "too basic" or "just facts you can look up on-line", this is the screening process. If you want to stand a chance with these types of companies, study! Know the tricks. Eat Fibonacci for breakfast, B-Trees for lunch and some Big O notation for dinner. Practice at HackerRank or some other coding problem site.

Also, I would have been better off using algorithm-design comments more than actual code, especially when I was going to run out of time in the test, or during the coding interview where code would not run. Partial credit in those situations would hopefully have shown I knew what needed to be done ("solved the problem") and just needed time to code it correctly.

In the end, this is the company's interview process and if I want to compete in that space, I need to hit the books!

Takeaway Two: Interview for the Position

I mentioned this was for a Technical Lead position, yet I feel that every multiple choice question, and especially my one-on-one problem-solving interview, could have been solved by looking things up on StackOverflow or Wikipedia. Not a single question about managing a project timeline, following a software process, thoughts on testing, system design, choosing a framework, or component architecture. Maybe there would have been more of that past this "screening technical interview", but if I were hiring a team lead, one would think those items would be important.

Same goes for the timed test and code-sharing, to a degree. Was that an accurate measure of my technical knowledge? No time for design, because the test clock was running and we had 9 minutes left on our interview when we hit codeshare.io. HackerRank is a cool tool, but is coding without visible test inputs or expected outputs very common? Speed coding might be fitting at a hack-a-thon, but "just start typing" is not a typical way to get the best results at the office. The concern is that while these tests/reviews might be good for screening, are they relevant to your actual work?

Other interviews I have participated in required a homework assignment with a problem relevant to the position. Time was not stressed. Code was not forced or constrained by a tool. We could discuss the work result, and how I got there. To me, though it is more time invested, it feels more relevant to the job at hand. Perhaps that is seen more with smaller companies, but the big ones that are hiring 2-20 people a month, let alone screening 5x-10x that, just don't have the bandwidth for more. Though I wonder if turnover has any correlation to the interview process chosen.

Interviewing is its own set of skills, and I'd better stay sharp : )

Fast and Slow Questions

I like going to art galleries and am genuinely impressed by the skill and creativity of the artists showing their works. I'm typically drawn to the more technical works that, in my mind, highlight ability over expression. Photorealistic pieces are my favorite. Sometimes I like the abstract or minimalist, but that's just less often. Everyone has different tastes, because we are all a little different, and that's great!

The one thing about art galleries is that it does not take me more than five seconds to know if I like a piece or not. I do not need to, or care to, stand and ponder. When it comes to engineering projects, there are fast questions and there are slow questions. Certain times or certain clients might require one over another, and the questions you are asking will drive the outcome.

Slow Questions


A typical slow question might be "What do you want?" or "What do you need?" Generally this could be called brainstorming, and whether that session takes an hour, or spans several sessions, the question is slow. The feedback takes time to think about. The big picture is not fully formed. Everyone has different ideas and goals in mind.

Sometimes slow questions are needed. Requirements need to be generated. Technologies and processes need to be evaluated. Requirements need to be updated. Third parties need to be consulted or included. Maybe it becomes painfully slow! Yet by the end, you have a well designed system and everyone is on the same page with the outcome expectations.

Fast Questions


Fast questions have fast answers. "Yes or No?" "What don't you like?" The fast question will demand fast feedback, and is often iterative. This is not a single question and then everything is done, but one question leads to the next iteration and the next fast question, yet things keep moving and people remain engaged.

Experience plays a huge factor in being able to ask a fast question. Knowing what is out there, and how to apply tools or technologies. Knowing what has worked and what has not. Proof-of-concepts with a demo project. This type of experience allows for questions that can be answered and acted upon quickly.

What Questions Are You Asking?


If you work with the DoD or for a large enterprise, often you'll be asked (or asking) slow questions. A lot of people, risk mitigation, and contract preparation need to be lined up prior to the work. It can feel slow, yet often is necessary.

Smaller teams and those trying to get an edge (first to market or finding new business) cannot afford the slow questions. Either a client is not willing to pay for the perceived delay or they do not have the answers to your slow question anyway ("What do you need?"). Having a prototype could be key for faster situations, because it allows for a fast response. "Is this what you meant?" "What changes before we go live?" These are fast questions and can accelerate development and interest.

The types of questions we ask are the types of answers we will receive. At the art gallery, I'll still ask the fast ones.

Tuesday, August 30, 2016

The Rush, in Retrospect

Preparing for the meeting did not go well. Live, on-site, with client and end customer, and it could have been better.

Crud.

The actual demo was a success, but getting there was not. What happened was not fun. It was not fun for the client, whose expectations were not met. Not fun for our bosses, who dealt with the brunt of the fallout and had reputations on the line. Not fun for the dev team, who worked hard to get the demo even to the state it was in, plus felt the guilt for a less-than-stellar technology rollout.

Now that the smoke has cleared, the yelling and finger-waving has stopped, and blame has been passed around, we can take a look at what happened. Two questions need to be asked:

How did we get here?
How do we keep it from happening again?

Getting here was the easy part. Because everything is fine, until it is not! Looking with hindsight, there were a few things that changed silently. Dates got pushed up a few weeks. The demo changed from a tech team demo to a customer meeting. The meeting turned from a demo into a pre-launch. A feature or two changed from the plan. That little technical hurdle that takes longer than planned.

Thinking everything was on track was easy, but as changes to expectations started trickling in from different places, they were not all in sync when the meeting rolled around. And that is how we got here.

The thing is, by then it was too late. There was no time for a fallback plan. Strength of will or more management oversight cannot make anything happen faster, or even just fast enough. The yelling was endured, the plan was laid out, the weekend was worked, and with a final push, the launch was successful.

Hooray?

The result of a successful launch is good, but getting there smoothly is a better goal. Since this was not smooth, how can we keep it from happening again? Ultimately it falls on us. The client will always be the client, so they may miss art delivery deadlines by weeks, set meetings without our input, and always ask more (for less). Might be annoying at best and infuriating at worst, but clients will be clients!

The point of process is for situations like these, when expectations are key. There were no defined requirements or checklists of what was needed by what date, just that "it should work and be awesome." Somewhere our software process broke down, which led to bad expectations. Running with a broken process is misleading too; by not assessing things early, we did not address issues soon enough. At each change in dates, we needed to have been very clear about what features were going to be ready, not a mix of "everything in the contract, just less, with different dates."

The other thing that would help things run smoother is a defined contract style. In broad terms, the types of contracts we have accepted are fixed price and retainer. Quick recap of each:

Fixed Price

  • Defined set of features to deliver.
  • Defined delivery dates.
  • Set price


Retainer

  • No specific features, but team is available to work
  • Dates are defined for retaining purposes ("start of each month")
  • Price is "per time" (hours worked, or monthly)


Fixed price is good when someone knows what they want, and requires a lot more up-front planning. People always want fixed price (and want it to be lower!), but often do not take the time to define features. Good for fixed deadlines (trade show demo, bottom-line price for release), and often has some contention toward the end, like "that feature was expected" or "it obviously should work this way".

Retainer is a good model when people know they need work done, but cannot define what that is, or have a tendency to change their minds, or need to see it to know if it was "right." Usually things are smooth in the start and middle of a contract, and the problems come up at the end of a retainer, where people always question the effort without knowing how long it should have taken in the first place.

Back to the post at hand: we mixed the two. We signed up for a fixed price with fixed deadlines, then changed the deadlines and were working like a retainer model. Almost any change that was requested got picked up, which was OK until some people were thinking of the original deadlines ("no problem, plenty of time!") and others were thinking of the (new) demo dates. And the retainer-style work was not meeting specific, defined constraints, so while everyone was "busy", the work was not focused on the important things.

The contract is now basically concluded, with time to spare! Learning from ourselves this time, we'll likely improve for the next. 


Thursday, August 4, 2016

Prototype Rocks Are Better

A common analogy with my bosses is "Bring me a rock." This fictional conversation between an "idea person" and "engineer" would go something like:

Idea Person: "Bring me a rock."
Engineer: "Here is a rock."
IP: "It is not dark enough."
E: "Here is another rock."
IP: "Why is it so jagged?"
E: "You want a darker, smooth-ish rock?"
IP: "Yes, obviously."
E: "Here is a rock."
IP: "...it's too small."

We are not geologists, and this conversation is not really about rocks.

Ideas are not bad. Quite the opposite, they are necessary! People who come up with ideas are vital from an entire organization down to a small development team. New products or features were not written down long ago and rehashed; they were new ideas at some point and through many steps, and sometimes many iterations, became reality!

Ideas are hard to pin down, and sometimes they are fluid. When a person has a great idea, there are steps involved to make that idea real. Many steps. Rattling off a few sentences of the idea and expecting the result to meet some vague standard of success rarely works. Especially with an engineer who needs (lots of) details.

And yet, this happens all the time. Which is normal in a lot of cases, and idea people and engineers work together to make it happen. It's called "design."

This  situation becomes a problem when the idea person and the engineer are not working together. Suddenly the idea person wonders why their simple idea is not done yet, and the engineer is frustrated at wasting time on undefined "requirements." Two or three iterations of "Bring me a rock" can wear someone out! Compound this with an idea person that changes their idea before the engineer shows them anything, already invalidating some of that engineer's work! Aaaaarrrgh!

That is why you need a prototype.

Having a prototype is the place to really start a conversation with those idea-person types. It is tangible. It is something to talk to and point at. The dev team needs to get that prototype ready as fast as they are able (keeping in mind their process and maintainability goals), and the "idea people" need to wait until the prototype is ready before making any new suggestions or changes. Unless something is drastically wrong, or some data proves otherwise, stick with the prototype and discuss when it is ready.

Beyond shortening the "bring me a rock" exchange, a prototype clarifies ideas. When five people leave the idea meeting thinking of a rock, more often than not they are picturing five different rocks! Looking together at a prototype may result in a long list of changes, but the changes are defined and everyone is clear on what is being done and what to expect as a result.

Prototypes are also handy when people do not even know the type of rock they want. You might explain the rock five different ways, but until someone sees it, it might as well not exist!

E: "It's a round, smooth, gray rock about the size of a grapefruit."
IP: "I've never seen that before."
E: "I haven't built it yet. Just try to imagine it."
IP: <blank stare/>

Great explanations and ideas will only go so far. At some point it needs to be tangible. And once it is tangible, people can actually engage in the conversation.

Hopefully bringing rocks and asking for rocks will be part of your future. Just remember that prototype rocks are better. For everyone.

Monday, July 18, 2016

git Permission Denied after reboot

Common issue that I just dealt with again, but wanted to write it down for that time I won't have it memorized.

My machine's git account is tied to the RSA key for GitHub. The company switched to Bitbucket, I created another key to keep the accounts separate, and now whenever I restart my machine git gives me an error like:
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.


Turns out I need to add the new key to the ssh agent, because it loses track of it after a reboot. Some folks have suggested putting this in their bashrc so they don't need to retype it, but I don't restart the machine too often and typed this post instead.

ssh-add ~/.ssh/id_rsabb

The RSA key is in the .ssh directory, but needed to be added to the ssh agent. "id_rsabb" was the name I chose for my Bitbucket RSA key. Simple command, enter my key's password, and git + ssh are working together again. 
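Another option that skips the manual step entirely is binding the key to the host in ssh's config file (the id_rsabb name is from my setup; AddKeysToAgent needs OpenSSH 7.2 or newer):

```shell
# ~/.ssh/config -- pick the right key for Bitbucket automatically,
# and load it into the agent the first time it is used
Host bitbucket.org
    IdentityFile ~/.ssh/id_rsabb
    AddKeysToAgent yes
```

With that in place, the first `git pull` after a reboot prompts for the key's password and the agent remembers it from then on.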

Tuesday, July 12, 2016

AWS SNS.publish Extended Message Data

We were using AWS SNS in NodeJS to send push notifications, and we wanted to append extra data to the payloads. AWS has good docs for the plain text message, but trying to use a JSON Message object in the sns.publish call produced an error like:

2016-07-12T16:19:09.713Z - error: InvalidParameterType: Expected params.Message to be a string

After some searching, I found the answer in an AWS forum dated back to 2013. Thought I would repost here for easier reference to myself.
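In short, the Message itself must be a string, and each platform payload inside it must also be a string. A sketch of the publish params (the ARN and the data fields here are placeholder values, not from a real app):

```javascript
// Message must be a JSON *string*, and each platform payload inside it
// must itself be a stringified JSON object.
var params = {
  MessageStructure: 'json',
  TargetArn: 'arn:aws:sns:us-east-1:123456789012:endpoint/GCM/app/example', // placeholder
  Message: JSON.stringify({
    default: 'fallback text for other platforms',
    GCM: JSON.stringify({ // stringified a second time, per platform
      data: { message: 'hello', extraKey: 'extra payload data' }
    })
  })
};
// sns.publish(params, callback);
```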



The trick was those extra JSON.stringify calls: once to turn the platform messages into strings, and once to turn the entire Message object into a string. Seems like some odd parsing, but I have received notifications on an Android phone, so at least GCM likes it.

Wednesday, June 15, 2016

Redis Ordered Set WITHSCORES into JSON Object (NodeJS)

A note for myself before I stop using this ordered set.

Redis ordered sets can return the members of a set with their scores. It's an option on the zrange command, so to get all the values in an ordered set with their scores:
`redis.client.zrange( key, 0, -1, 'WITHSCORES')`

Of course, the returned array is pretty specific, something like:
user1 135000 user2 384000 . . .

Using the handy lodash library, I wanted to keep track of this little hack:
//this is a synchronous operation and would block the event loop
var testScores = _.fromPairs(_.chunk(testValuePairs, 2));

I've seen it done with for-loops that skip every other array item, but I liked the single line of this approach. It does go through the array twice, once to turn it into an array of pairs, and once through that array to create the object, but for my purposes the array sizes were going to be small and the succinctness was useful.
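For completeness, the for-loop version without lodash looks like this (a sketch; it assumes the flat [member, score, member, score, ...] reply shape shown above):

```javascript
// Convert Redis's flat WITHSCORES reply into a { member: score } object.
function pairsToObject(flat) {
  var out = {};
  for (var i = 0; i < flat.length; i += 2) {
    out[flat[i]] = Number(flat[i + 1]); // scores come back as strings
  }
  return out;
}
// pairsToObject(['user1', '135000', 'user2', '384000'])
//   returns { user1: 135000, user2: 384000 }
```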

Wednesday, June 8, 2016

Team Focus

One of the intangibles of software teams, or any team that is working together on something, is team focus. Focus is vital to keep everything flowing and to meet expectations, but it is a lot harder to turn into a metric or graph because it is a soft measure. It is not quantifiable. It is almost a . . .  feeling!
Team focus is how well a team works together, toward the same goal, and how self-motivated it is to achieve that goal.

Focus among individuals varies and some need a little more coaching than others. Some will get into "the zone" quickly and may even seem anti-social. Some are too addicted to their phones or Facebook, and that's just an employment concern. Some need the instructions once and it gets done and some need reminding more often, and it still gets done! However, these are mostly all personality traits. While a team should have a few different personalities to avoid freaking out the other departments a la The Stepford Wives or The Borg, an individual's ability to focus is just part of the equation.

The team's focus cannot be measured in the way we can measure tickets, sprints or SLOC. You won't see a graph or color chart indicator. You might think you would be able to deduce your team's focus based on the measurements of your process, but measuring the wrong things or good planning could be factors there. Just look at the components of that "team focus" definition above and see how you would describe your team.

"Works Together"


How your team works together is paramount to being focused, and it is affected by the company culture and personalities. Teammates who are willing to help and explain make a team work better together. Questions are not a bad thing. Whiteboard sessions are a group effort and all opinions are heard. People smile. There are inside jokes at the office about non-project, and non-people things.

To many, it is obvious when a team works well together, and is not "just working." They actually get along, and that is part of keeping them focused.

  • Does your team have a loud-mouth or know-it-all? That does not build morale, and it is already a factor as to why people aren't staying on your team.
  • Are coding opinions bashed as inferior? If the "right way" is not written down (and, if possible, automated by tooling), just deal with it. You can still read the code.
  • Is there any blame or names at bug review, or deploy time, or at demos? Double-check the review process and steps prior to release, because the team released the software.
  • Does one part of the team have the inside track for project decisions, and the others are only informed later? Be careful if these are not documented roles, because saying "this is how it is now" leaves one part of your team on the outside. 
  • Are there two sets of standards? Billy can skip writing a few tests because his work is important and needed right now, but Bobby gets dinged for skipping tests. Susie is made to follow the coding conventions during review, while Peggy doesn't run the auto-formatter. Watch out for this, it is divisive and will chip away at the team focus.

"Toward the Same Goal"


The team knows what it is working on, and you have a process that proves it.  There is business buy-in, so the team knows it's the right thing to work on. The stakeholders will be happy, so the team expects some kudos. Everyone knows where their piece fits into the current sprint/milestone/demo, and why that is important for the future of the project.  This, all wrapped together, is the object of the team's focus.

Know the priorities of your goal. Is it better to build performant or functioning software this cycle? Should the code be good or perfect? Tests too? Can your schedule accommodate "all of the above"? Different approaches to those questions can still look like the "same goal" but if each person has a different answer to the questions, there is a lack of focus.

Also, and this may sound obvious, but do not change your goals mid-sprint/cycle/sentence! If a sprint is defined, don't add to it after starting. If a feature is planned and started, don't add to it until the next planning phase. Changing goals are ludicrous to track. Your team cannot anticipate when new work will suddenly be added. No one can answer the question "When will it be ready?" The dev team just sees arbitrary buckets of tasks that can change or grow, which certainly does not help them focus, and sometimes crushes morale.

"Self-Motivated to Achieve"


Did your team catch the fire? Do they believe in the project? Are they excited to show off the demo? Can they hang their hat on a day/week/sprint worth of work? Would they call it their own?

Like many things, people make more of an effort and achieve better results when they are passionate about what they are doing. Being a creative bunch, software developers need a little room to grow their ideas, so don't force opinions on them all the time. Assign meaningful tasks, not just filler or "grunt work"; even test writing can be meaningful. Open up parts of the process and operations to others, to grow their experience and to give more responsibility. Be careful of someone hogging all the hard stuff/glory/blame - this is a team. Compliment a nicely developed feature, or a sweet algorithm. Tangibly celebrate team accomplishments! (FYI - an email barely counts.)

The more involved someone feels in the team, and in the process, the more meaningful their work becomes, and the more ownership they will take. And they will be self-motivated to reach team goals and achieve higher standards.


While one cannot use their software process to measure team focus, the focus of a team will directly impact its results. Hard to measure yet highly important. Strive to keep your team working together toward the same goal, and self-motivated to achieve that goal.



Friday, April 29, 2016

The Point of Process - Part 2


Once the point of process is understood, it is time to measure your current process. This step is important because the point of doing any software process is not "is it good Agile?" or "did we follow methodology X correctly?". The point is: does it work?

A contract of the past is what spurred on this article. We were working tightly with another company, completely ingrained in their process, and my boss would ask me, "How's it going?" My response was mixed, because on the one hand I knew what I was assigned to work on and was working on that. That makes a boss happy. But my boss, being a stakeholder in the work being done, was really asking those two questions from Part 1 so he could evaluate the current contract and plan for new ones. I'd tell him, "I'm doing work, but I have no idea where it's going and when we'll get there!" It was a software process that was missing the point.

The same questions that the stakeholders are asking are the questions to use to measure one's software process. If you have an answer, the process is working; if it takes more than two minutes to generate or explain the answers, there is a problem.

Where is the software going?


Every member of the team should know the answer to this question. If one is a Team Lead or Product Manager, the answer needs to go both ways. It goes up to the stakeholders, like Business Dev, Marketing, and the C-Suite. If they are not directly planning what the software must be capable of, they at least need to be made aware of what is coming next in terms of features and improvements. 

This question must flow down to the devs as well. If they do not know where the software is going, motivation drops and future features are not planned for. Keeping an air-gap between business and dev stifles ideas from those who know the technology best, and lowers the meaning of the work they are doing to simply "doing what they're told."

When looking at your software process, each member on the team must be able to know, quickly and easily, where the software is going. 
  • Can anyone look at the list/board/spreadsheet/diagram and know what is coming next? 
  • Can they see where their work fits in right now? 
  • Is there a place to write down new ideas? 
  • Are the milestones front and center? 
  • Can the reason for the feature (or entire project) be summed up in two sentences that give meaning to the devs and make sense to the business side? 
  • What will come next, after this cycle of development?
Use your process to understand where your software is going.

When will it be ready? 


The schedule is vital to a business, so the development team cannot simply stall for time, or make excuses like, "it's ready when it's ready." Demos, marketing, trade shows, and everyone's paycheck are resting on these dates. Any slips need to be communicated. (Any early finishes should be celebrated!) 

A memorable quote by Walt Disney, which I like to repeat, is "Everyone needs deadlines." Walt goes on, but that's the important part. The software schedule sets everyone's expectations of "done", and without it come confusion, miscommunication, and unmet expectations. Developers need to fit their work within the timeframe, and cannot go off the path just "because" ("because this new way looks cooler", "because I want to refactor now"). Estimation is an important skill for any team, and fitting what must be done within time bounds is part of the planning process, not the middle or end of the execution phase.

The schedule has to be clear, and your process must make it clear. Having the business side stay out of your hair until the software is supposed to be done, only to have them break down the door after a missed date, is terrible. Don't work like that! Use the process to quickly and easily inform the business side and guide the development side. 

  • When is this cycle of software development going to be done?
  • How close is the team to being done? How much more time is needed? 
  • When is the code frozen (if not continuous)? When will QA look at it? When will customer see it?
  • Is the team currently on track? How accurate were previous estimations? 
  • When is the next major milestone?

Use your process to understand when the software will be ready. 

If You're Happy and You Know It


What is the ultimate measure of your software process? Whether or not the stakeholders are happy! That's it. If they are unhappy, look into ways to improve the process to better answer the two questions. (Hint: this likely involves bringing them into your process, not pushing them away.) If they are happy, then great! Take a little time to see how to make your life easier.

Stakeholders looking at your software might not be very technical. Metrics and procedures mean very little to them. When they evaluate they are looking at two things:

  1. Did you do what you said you were going to do?
  2. Was it ready when you said it was going to be ready? 

Be able to answer "yes" to those two questions and you will have some happy stakeholders.

But you knew that was coming, right? Because your software process, quickly and easily, has been answering those two questions all along, so everyone knew exactly where the software was going and when it was going to be ready.

That's the whole point of all this process, and that's how you know it worked.

Thursday, April 14, 2016

The Point of Process - Part 1

A software development process was very real during my time in the defense industry, but was barely taught during my time in college. I saw mixed views across the wide range of small businesses and development shops, depending on where one worked. There is plenty of material out there too! Manifestos, guidelines, workshops and so on, all geared to learning and following a particular software process. And all this truly has made software development better, and allowed for better software.

If one did not have varied experiences, or there was not time to do extra reading, or one was just getting started in software, they might wonder, "what is the point of all this process?" Or if one was getting frustrated with a software process, because it was too much pointless work or the process never told them anything useful, they might wonder, "what is the point of all this process?" And some people just need to be reminded, because the point of a software process is not the process itself.

Software Process Answers Two Questions


Where is the software going?
When will it be ready?

That's it. Be it waterfall or agile or something else, it all boils down to these two questions.

Where is the software going? 


Software needs a goal. The goal could be planned months ahead or days ahead but there needs to be an end state. That could be a milestone with a defined set of features or requirements, or a rolling set of features as business needs change, and the process keeps everyone involved on track. Each person can state why they are doing the things they are doing (writing code, performing tests, etc) and, to a higher level, where those things fit into the entire software system.

When will it be ready?


Schedule is important. The software process tells everyone when something must be done. This helps the business guys plan sales demos and constrains the developers to stop tinkering. If the team cannot state when the software will be ready, chances are it will not be.

Who Asks the Questions

The process is for the software and the software is for the stakeholders. Only minimally is the stakeholder the developer. Sometimes it could be a Tech Lead, Scrum Master, Program Manager or CTO. Indirectly it is the end-user. The stakeholder should be the business side of the house, and very often it's the person writing the check. The stakeholders are the ones asking the questions, and the answers to those two questions (from the dev team) are sometimes their only view into the software process.

Development is not the point. In the end, process is not the point either. Keep your stakeholders happy and meet their expectations, which can be done by answering two simple questions. You might use a software process to help answer those questions, too : )

Tuesday, April 5, 2016

Meaningful Software Tests

Software testing is vital to any project, though not all software tests are meaningful. It's not a big deal if a few extra automated tests get written or run. It is debilitating if the tests cannot catch what they should, forcing errors to be found outside the dev team or becoming a maintenance nightmare. So what tests should you write?

Testing Types


There are roughly three types of software tests:

  • Unit Test - the lowest level of test. Against a single "unit" of code (a class, a group of methods, an algorithm, etc). One unit is tested in isolation and any other units are mocked to keep functionality going and maintain control of the test.
  • Integration Test - Multiple units tested together, but still the same software system. Can have dependencies outside the software, like a database or even, gasp, an internet connection. Much less is mocked at this level. 
  • End-To-End Test - the whole deal. Testing from one end of the system (user interface) to the other (database, algorithms) and back. Runs real code in real environment with (custom) test runners. Nothing is mocked - it doesn't need to be!

People might use different terms for these types of tests, and different approaches might blend somewhere in the middle of them. Want a long list to amaze your friends, or give you some good ideas? Here.

But honestly, the type of testing you do isn't the most important thing. That you have testing is important, but varying opinions and success stories will advocate for one type or another.

Is There Meaning In These Tests


The type of test is not the issue; rather, it is "are these tests meaningful?" Each test that is written should have some meaning behind it - test a particular function or flow so you know it works. Test a particular business need or requirement so you know it's covered. Test a particular corner case so you know that bug will not come up again.

You need to be confident in your testing. These are little bits of automated software that will prevent errors down the line (where they are much more expensive to fix). When you run your tests, you need to know the functionality they tested and be confident that functionality was performed correctly.

Two quick examples from the past.

Back in the defense industry, documentation was king. Requirement number to test number to test results. Who wrote 'em, who performed 'em, who witnessed 'em, and whether each test passed or failed. If you ever wondered how complex programs with hundreds or even tens of thousands of requirements can get approved, it's because of the documentation backing them up. Each test was performed for a specific purpose, and that purpose was written down. Anyone looking at the resulting documents would know what was being proved when some test step was being performed.

A second example was a past contract. They had pure unit tests that were never to turn into integration tests. One test for one unit. Code coverage was measured and the number of tests was counted. If you were to ask why we were writing all those tests, it was to get code coverage up and have all the code tested! The bad part was that every piece of code was tested in a vacuum (no database, no outside services, minimal component communication), so when the software verification group looked at something (or worse, deployed it!), lots of problems were found, because it was the first time those pieces mingled outside of developer testing. There should not have been confidence that those unit tests were a predictor of safe software.


It Depends


Like many software answers: "It Depends." The type of testing you use and how much you test will depend on your goals and business needs. Make sure your code has tests, that those tests have meaning, and that your team understands why the tests are there.

Don't waste time writing tests just to fill metrics. If 100% code coverage is important to, or required in, the project, write those tests! But if you made up a number for coverage and then force yourself to reach it (or even better, just keep adjusting your target number!), you are wasting time. Don't test code just to check off some software engineering list - the number of tests you run is as meaningful as SLOC metrics. Tests need to be updated and maintained, forever, and are written with each new feature. If time is being wasted, find out sooner rather than later!

Meaningful tests will help the team too. Later workload will be reduced by catching real bugs early, avoiding those crazy weekend or late night sessions to "GET IT FIXED!!!1", not to mention freeing up your Verification or QA groups a bit. The dev team will have confidence and faster feedback that what they write works, or doesn't, boosting productivity. Continuous Integration is not possible without tests you can trust. Writing tests that have meaning makes test writing valuable, leading devs to avoid half-baked work (and insidious false negatives).

One final tip. Tests will have more meaning the closer you can get to real code on real hardware. This is also a more expensive environment to set up, and maybe cannot be automated, so it is another decision to be made. Where you are able, get as close as you can to real code on a real system.

 Happy Testing.



Wednesday, March 23, 2016

NodeJS on EC2 Instance

I needed a clean server to run a quick NodeJS app. We have been using Node for a while now, but I haven't set up a server from scratch in some time. Either it's been working and we've just been using AMI images, or I used Elastic Beanstalk, which has done the Node setup already. There are some good blogs and tutorials on how to build Node from scratch, but all the building is kind of tedious, especially if you picked a cheap server without all the processing power of the more expensive tiers.

So, to do this faster,

sudo yum update
sudo yum install openssl openssl-devel
sudo yum groupinstall "Development Tools"
That gets a few things ready on the system, and yum is quick. 

Next, I'm using Node Version Manager, or NVM for short. This is just a glorified bash script, but I use it on my dev machine and it's really great. (Especially if your last contract was still using Node 0.10 but pet projects want to use the latest stable version.) Also, it avoids all the source code building, which is the main point of this blog post : )

From the readme (all one line):
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash

And that's it! I exited my ssh session, and came back. Then quickly ran
nvm install 5.9.1

Which worked flawlessly, and I could run node and npm in my terminal. 
$ node -v
v5.9.1
$ npm -v
3.7.3
I personally found that quicker than the other blogs out there. Hopefully it saves you some time as well.

Later Edit: When running after this install, I had an issue where the user that installed Node could find it (like above) but other users could not. The option I went with was from this blog, running the command:
n=$(which node);n=${n%/bin/node}; chmod -R 755 $n/bin/*; sudo cp -r $n/{bin,lib,share} /usr/local
And to quote the blog:
The above command is a bit complicated, but all it's doing is copying whatever version of node you have active via nvm into the /usr/local/ directory (where user installed global files should live on a linux VPS) and setting the permissions so that all users can access them.

This command would need to be run each time I switched versions. I also had to make sure /usr/local was in the $PATH of the users that needed Node. An alternative to these extra commands would be to just use the nvm-global version of nvm (a different repo), which I might try next time. But for now, I'm set up, so will get back to it!

Thursday, March 17, 2016

Better Faster and Cheaper Expectations

A common theme in software development is the old adage of "Better Faster Cheaper," which includes everyone involved in the project, whether they know it or not. We all, especially devs, want our software to be better (fewer bugs, more features, faster performance, painless DevOps). We all, especially managers, want our software to be done faster (because a business/demo schedule is in place). And we all, especially those with the checkbooks, want our software to be done cheaper (in terms of expense, and software is not cheap!). 



As the adage goes, "you can only pick two." The "best" place in the triangle is probably the center, but that's just equalized. Maybe fewer surprises and less risk, but everyone is out to optimize something, and the focal point shifts for a number of reasons. In some cases, it has to! (Or at least should.)

So, where is your project aiming?

Better and Faster, but not Cheaper


Great software delivered quickly. Everyone likes great software that does what it is supposed to do without problems. And there are good reasons to need it fast, like being first-to-market, or iterating features faster than your competition, or your investors are getting antsy. It can be done!

And, it comes at a price. For any part of the project that can be developed in parallel, you can throw people at the problem. Maybe some incentives will keep the team working harder. Buy the good machines and latest tools. Maybe you can hire a real rockstar team of experienced devs, or just buy the technology and services your system is missing.

Whatever the approach, the need for better and faster is outweighing the cost constraints . . . and that's the point! If all those reasons to need it fast are right, and the software is great, you have a winner on your hands and would gladly spend that cost all over again.

Better and Cheaper, but not Faster


Great software at an economical price. Everyone likes great software that does what it is supposed to do without problems. And there are good reasons to need it done cheaper, like lack of investment, changing company profit margins or shrinking development resources.

Without resources, time slows down. Be it lack of motivation or reduced effort, the software can still be good, but do not expect it to be done quickly. I see this in a lot of Open Source projects that put out great software, but many of those are run by one or two devs who have other jobs and interests. They build and maintain great software, for free (can't get cheaper than that!), but sometimes their Issues list is long or v2.0 has been living in beta-mode for a while.

Progress won't happen quickly . . . and that's the point! Resources are being assigned elsewhere (or never existed in the first place), and the project is still moving forward without burdening cost (time and talent).

Faster and Cheaper, but not Better


Everyone likes great software that does what it is supposed to do without problems. And they want it now! For free! This area of the triangle is what causes developers the most strain. They want to spend more time on that problem, follow good development practices, investigate all the alternatives, re-write that ugly piece of code from last month, etc. - but deadlines and budgets don't allow for it. During my time in the defense industry, I was reminded that I wasn't creating the absolute best software for my task . . . I was fulfilling the stated requirements with the contract constraints that were in place. Talk about strain!

While customers and managers are always monitoring for faster and cheaper, they also want better! This is where a dev shop can fall into problems, by promising better without any means of faster (except "work weekends forever") or eating the cheaper themselves. Having someone "on the dev team's side" that can explain this situation is vital to the health of your team and project.

Faster and cheaper are important and often inflexible . . . and that's the point. When those are the biggest priorities, all parties should be made aware just what part of better is going into Phase 2. There are some times where faster and cheaper make a lot of sense as well, like internal prototyping or feature proof-of-concepts. Better can be handled later, when someone else is paying for it!

Expect All Three


Of course, anyone involved with a software project will say, "I want it all!!!" And they're right, they do want it all. They should have it all too! But the grumpy software veteran will say "nope, you can only pick two" and the uninformed customer will be surprised when the target starts drifting towards one corner of the triangle, only to want to focus on a different corner the next week. The unmonitored devs will push to better every time, especially if they're allowed to just refactor it all again, only to wonder why their schedule is shot and they're rushing at the end. Can all three really be done?

There is no mystical triangle to guide our software, but all decisions will have some outcome. If everyone is expecting the same thing, and the team performs, everyone is happy and not thinking if it could have been better, or faster or cheaper. If, however, expectations were not equal, then someone is left wanting more and pointing fingers at the imagined target on the triangle. "It should have been faster." "What about feature X?"

Yes, we can have all three . . . in a way. If expectations are set correctly and maintained throughout the project, everyone is getting what they expect to get and don't think about the triangle . . . and that's the point! With a defined and limited set of requirements that are perceived as "better", the team can work quickly and cheaply. With a defined budget, that "looks economical" and isn't skimmed, for the right team, good software can be created quickly. With achievable milestones that are "fast enough", good software can be created cheaply.

The triangle is useful to explain compromises while setting expectations. It is not a compass to magically redirect software efforts, nor a map that explains "how we got here is your fault." Set clear goals, keep communication lines open and enjoy great software that is so much better, faster and cheaper that no one even thinks about it : )

Monday, March 14, 2016

Card Game Project - Deployment

With some working components, and before jumping into the game logic, I want to try out building a native version for at least Android.

Cordova


Building with Cordova worked pretty well. I spent more time installing packages than I did building the app. The best part was the app works and looks great on the device. Special thanks to this blog for documenting their steps as well.
npm install -g cordova
cordova -v
Installing with npm is a breeze. I am using Cordova 6.0.0.
cordova create polycardwarapp com.chimmelb.polycardwar PolyCardWar
This initialized my application in the polycardwarapp directory.
cd polycardwarapp/
cordova platform add android
Build my dist/ directory and copy it to the /www directory of the app.
cd ..
cp -R dist/ polycardwarapp/www/
cd polycardwarapp/
cordova build android
This is where I got stuck. I had the Android SDK tools installed but either never used them, or needed to update them. 
android update sdk -u -a
That command made me agree to many EULAs, and started updating some build tools. I waited while it updated 24, 23, 22, 21 and quit soon after that. Then the build worked flawlessly.
cordova emulate android
The emulator was OK, and I installed the apk created at platforms/android/build/outputs/apk/android-debug.apk on my phone. It started up quickly and was responsive. 

PhoneGap


The instructions for PhoneGap building were pretty straightforward, and the process was more app-based than command line. I followed their instructions for the sample app, copied my dist/ directory into the www/ folder, and the next build worked! 

I was surprised to read this: 
Apache Cordova is the engine that powers Adobe PhoneGap™, similar to how WebKit powers Chrome or Safari. 
...so perhaps that's why these were two positive experiences.

Mobile Chrome App


My first try was the documentation to build a Chrome app, included in Polymer. This worked enough to build an app, but the routing in the app didn't work for me, and I kept getting toasts about not finding certain app paths.

Next Steps


I'm sure Cordova and PhoneGap have many more options, like the icon for the app, portrait-only view modes and other cool things. I sometimes get an app routing error when first starting the game, and there could be other errors hiding that will only be visible in an app format, rather than in the web browser (PhoneGap's tooling would probably be more useful in that case too). The Crosswalk project is also an option (or plugin) that could be useful.

But deployment is possible, and that's what I intended to find out. The game needs a lot more logic, but the layout and user input ("clicking") all worked on a mobile device in an app format. A more formal solution would be needed before hitting an app store, but the biggest risk of "Is this even possible?" has been cleared, so we can continue back to the app!

Tracking

This was another day, though there were some other project hurdles to handle today. And like I alluded to earlier, installing Android packages was a bigger time-sink than anything else. The actual build tools worked great! 


Friday, March 11, 2016

Card Game Project - Components

There are two components that are custom to this game, the <game-card> and the <card-stack>, which is a collection of <game-card>s.


Cards



The simpler of the two will be the game card. Like your standard playing card, it needs a front and a back. The back will have a logo (the Polymer logo), the same on all cards. The front will have text that reflects some rank (in this game, it's basically 1-6 plus 3 special card types).

The first trick is to not show both sides of the card at the same time. I took out the other styling attributes from the css, and just wanted to show the effect of a relatively positioned outer <div> (so it's still affected by flexbox correctly), and two absolutely positioned inner <div>s, with classes front and back. The outer div defines the css transition and hides the back face of a flipped element (backface-visibility: hidden;). The front of the card is flipped over with transform: rotateY(180deg), and the front and back are both rotated 180 degrees in the same direction to get the effect.

That flipped class is applied with Polymer's special attribute data binding class$="front {{isFlipped(show)}}", which binds the calculation of the class to a function in game-card's script called "isFlipped", evaluated every time the value of the attribute "show" changes (internally to game-card, or applied to the element from elsewhere). When the class appears or disappears, the css animation is applied.
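A minimal sketch of what that bound function might look like (the function and attribute names come from this post; the exact class logic is my assumption):

```javascript
// Computed-binding helper assumed behind class$="front {{isFlipped(show)}}".
// Polymer re-runs this whenever `show` changes and swaps the element's
// class, which is what kicks off the CSS transition.
function isFlipped(show) {
  return show ? 'flipped' : '';
}

// The class attribute becomes "front flipped" when show is true:
console.log('front ' + isFlipped(true)); // → "front flipped"
```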

For now, an "on-click" event is present to flip the cards as needed.


Stacks



Cards in this game are played from stacks (or piles). Two things I wanted to do for stacks were:

  1. Proportion the cards' dimensions
  2. Fan the cards so I could see them all from top to bottom.


My first step was to make cards that had a respectable shape! Normal playing cards are 64mm x 89mm, so we want to keep that proportion. Plus, in the card stacks I want to "fan out" a stack of up to 5 cards and still fit in the width of the <card-stack> element (which is always resized with flexbox). A little calculation is needed, and the solution I've used was this (polymer properties removed):


With Polymer, getting the size of elements (or their parents) is tricky: if not done at the right time, all the values are 0. Here I used the IronResizableBehavior to catch an iron-resize event, and if the height is > 0 (meaning it actually rendered), performed some calculations to preserve the aspect ratio. 
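Since the embedded snippet isn't shown here, this is a rough reconstruction of the kind of calculation described (my own sketch, not the original code; all names are illustrative):

```javascript
// Preserve the 64mm x 89mm playing-card ratio, sized so a fan of up to
// `fanCount` cards (each shifted by `overlap` of a card width) still fits
// inside the stack's rendered width.
const CARD_RATIO = 89 / 64; // height / width

function cardSize(stackWidth, fanCount, overlap) {
  // total fan width = w + (fanCount - 1) * w * overlap, solved for w
  const width = stackWidth / (1 + (fanCount - 1) * overlap);
  return { width: width, height: width * CARD_RATIO };
}

// A 320px-wide stack fanning 5 cards at 25% card-width steps:
const size = cardSize(320, 5, 0.25);
// size.width = 160, size.height = 222.5
```

Inside the iron-resize handler, the computed width/height would then be applied to each <game-card>'s style.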

Fanning the cards needs to happen from inside the <card-stack>, but outside of <game-card>, so the aspects and behaviors of the cards are independent of the styles applied. Using Polymer's <dom-repeat> template, we can list the cards in an array, and with CSS just stack them on top of each other. This style is fully calculated, so it was done with a function bound to three attributes of the <card-stack> (and is recalculated each time one of them changes).
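As an illustration of that computed-style approach (my own sketch, not this project's actual code), each repeated card could get an absolute left offset spread across the stack's width:

```javascript
// Hypothetical computed style for card `index` of `count` cards, fanned
// left-to-right inside a stack `stackWidth` wide, cards `cardWidth` wide.
function fanStyle(index, count, cardWidth, stackWidth) {
  const step = count > 1 ? (stackWidth - cardWidth) / (count - 1) : 0;
  return 'position: absolute; left: ' + (index * step) + 'px;';
}

// In a <dom-repeat>, something like style$="{{fanStyle(index, ...)}}"
// would rerun this whenever the bound attributes change.
console.log(fanStyle(2, 5, 160, 320)); // middle card of a 5-card fan
```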



Tracking


This step is wrapped up, so I created a branch. https://github.com/chimmelb/polycardwar/tree/step-components

The resizing of cards (and where to bind values in the layout) took about 3 hours, the card flip about 2, and the fanning elements about 1. Measuring time is funny, because it really was about a day between meetings, lunch, the other projects that need attention, etc . . . cramming 6 hours of good work into 9 hours of my day : )

Thursday, March 10, 2016

Card Game Project - Setup and Layout

Setup

For a simple polymer project, their Polymer Starter Kit (PSK) is great. For two reasons:

  1. A Material Design multi-view, single page app
  2. Tooling

This card game doesn't need more than a play area and some menu options for "New Game" and maybe another page or modal for "Rules." Maybe something else, but the key here is simple. You're either playing a game, or setting up to play again. Plus, Material Design looks nice.

The other great thing is tooling. Opinions on which tools to use aside, having one-line options to build, test, run and deploy your web site is a good thing. The PSK uses gulp, vulcanize, browsersync and a few other goodies that are just good development practices. I noticed this newer version 1.2.3 didn't have jshint, so I added that back in to keep my code consistent. Use whatever tools you want, and when your tools work, good stuff happens : )

I also cleaned out a few items I wouldn't be needing. Some build scripts and some of the included example pages, then I added a simple flow for the game "landing" --> "new" --> "game".  "Landing" page allows a new game to be created (maybe continue a saved game in the future), "new" is the setup phase before a game, and "game" is where the game is played. Also edited the title. (This setup step concludes in this commit, if you're curious).

Initial Data Bindings

The basic layout for the game is two columns of 5 cards (actually card stacks), one column for each player. We might need a footer for some game information or setup actions, and there is already a header bar at the top.

I added a few custom elements to help with the layout. They are not styled and have no logic yet, but instead of building a layout with just <div>s I am one step closer to the real thing. And I want to start building with real data. This may not be the final data model, but it is enough to get started. 



Each game will have its current state, player1 and player2, and each player has their stack of cards. I created a <game-board> element and bind the game state to it.
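The shape I have in mind looks roughly like this (an illustrative sketch only; field names are placeholders and the real model will evolve with the game rules):

```javascript
// Illustrative game-state object, the kind bound via <game-board game="{{game}}">.
const game = {
  state: 'setup', // e.g. 'setup' | 'playing' | 'over'
  player1: { name: 'Player 1', stacks: [[], [], [], [], []] },
  player2: { name: 'Player 2', stacks: [[], [], [], [], []] }
};

console.log(game.player1.stacks.length); // 5 card stacks per player
```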

And now my <game-board> element can access its own game property, and changes propagate. That's what makes the binding 2-way: using "{{}}" squiggle-braces when binding in the html and `notify: true` when declaring the property. You can read more in the Polymer docs.
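On the receiving side, that 2-way binding is just a property declaration; a sketch of the shape (Polymer 1.x style, names assumed from this post):

```javascript
// Declaring `game` with notify: true makes the element fire game-changed
// events, so edits inside <game-board> propagate back up through the
// {{game}} squiggle-brace binding.
const gameBoardProperties = {
  game: {
    type: Object,
    notify: true // enables upward data flow; omit for one-way binding
  }
};

// This object would sit inside:
// Polymer({ is: 'game-board', properties: gameBoardProperties, ... })
```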

Generally then, the game board has two areas, each with 5 <card-stack> elements, and each card stack element has a list of <game-card> elements (displayed as needed depending on the game or stack state).

Layout

Flexbox is a handy styling tool built into browsers via CSS. You could learn more about Flexbox by watching some videos, and more about Polymer's implementation of it via <iron-flex-layout> in their great docs. Our last project just used classes to define the flexbox properties, so I'm going to try the mixin approach this time (and maybe I can do something fancy with orientation layouts later on more easily).


This is the initial layout of the app. The game-area is a horizontal layout that holds player-area left, duel-area, and player-area right, and each player-area is a vertical and justified flexbox, holding 5 card-stacks. Most all of these mixins were defined in the <style> section of game-board.html, with a little style defined in card-stack.

A similar layout will be used in the "new game" page, a larger and wrapping right-hand area and removing the duel-area from the middle.

Tracking

This step is wrapped up, so I created a branch. https://github.com/chimmelb/polycardwar/tree/step-layout

The layout and setup steps each took about 2 hours and the data binding around 1, and I've been keeping the blog open as I go, so a little time there as well. But I'm not comparing time slices to any perceived speed I should be working at, just writing it down for now.



Wednesday, March 9, 2016

Card Game Project

A project idea that's been in my head for a while has been an HTML5 web game published as an app. Nothing fancy with <canvas> or animation libraries, but something like a turn-based game that was fun enough with simple controls. I backed a Kickstarter for a small, 2-player card game recently that my son and I like to play, so that will be the end-goal of this project.

Along the way I want to look at turning an HTML page into a true mobile app, via PhoneGap, Cordova or Crosswalk. I also want to track my time and progress; not because I miss my tenth-of-an-hour tracking days in the defense industry, but just to have some data points for future estimates and to observe any pitfalls of my own practices.

Technologies


There are a few popular frameworks for building web apps these days, along with a myriad of cool javascript libraries, and I can only choose one. I was really interested in trying the Ionic Framework, but it looks like version 2 is getting ready for prime time, which also uses Angular 2. I didn't want to ramp up on v1 because it might not pass my longevity test, and I didn't want to learn v2 because changes are still happening in both Ionic 2 and Angular 2. Since this is a side-bar project, I want to learn a little, but not everything new, since my goal is to actually finish : ) We just used Polymer in a recent project and I'd like another try at those web components, which I hear will be useful in Angular 2 anyway. (If my app were any more complex, with any of the services "native" to Ionic or the like, I'd want to use that framework instead.)

And at this point, there isn't much more needed than Polymer to get the app out the door.

Road Map


My basic plan for app construction is:
  1. Setup and Layout - get app and dev tooling working, Flexbox the main areas of the views
  2. Components - at least two major components needed for cards
  3. Control - Click events on components or game areas to confirm actions
  4. Logic - data structures and game rules (and AI?)
  5. Animations - Cards and actions look better when things move
  6. Deploy - turn into native app
I might test out the Deploy step earlier to get a feel for how that all works a little sooner. Waiting to try critical steps until the very end, even though you only need them at the very end, is a mistake. 

Waiting to try critical steps until the very end, even though you only need them at the very end, is a mistake.    ~chimmelb
I quoted myself on that, because it applies to small and large projects alike : )
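To make the Logic step a little more concrete, here's a hypothetical sketch of the kind of data structures and rules I have in mind. The card shape and the "higher card wins" round rule below are placeholders for illustration, not the actual rules of the Kickstarter game:

```javascript
// Sketch of the "Logic" step: plain data structures for cards
// and a rule function that resolves one round between two players.
function makeDeck() {
  const suits = ['hearts', 'spades', 'clubs', 'diamonds'];
  const deck = [];
  for (const suit of suits) {
    for (let rank = 2; rank <= 14; rank++) { // 11-14 = J, Q, K, A
      deck.push({ suit, rank });
    }
  }
  return deck;
}

// Resolve one round: higher rank wins, equal ranks tie.
function playRound(cardA, cardB) {
  if (cardA.rank > cardB.rank) return 'A';
  if (cardB.rank > cardA.rank) return 'B';
  return 'tie';
}
```

Keeping the game state as plain objects like this should make the later data binding to Polymer components straightforward, since there's nothing framework-specific in the rules themselves.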

Tracking


I'll get this up on github, and branch each stage as I complete it. I'll blog after each stage and record the amount of time it took. Again, this isn't a race and I'm not under the clock of some schedule, but hopefully will find the results interesting. 

Also, I need to figure out a good way to display source code in blogger, because code samples break up mundane writing ; )

The Series


Will update this section as other posts are written.


Components

Choosing The Right Technology

"He who asks a question is a fool for five minutes; he who does not ask a question remains a fool forever." - Chinese proverb
The development space is changing quickly these days. Open Source software and easier collaboration are producing a large number of new software libraries and frameworks at a very rapid pace. Which is awesome!

However, you can't keep up. Fundamentally, maybe you can follow the concepts or general principles of the software in question. But getting hands-on time on everything out there? Impossible. Too much to learn, too little time. Maybe you could slice a small piece off the Java/C#/NodeJS/AngularJS{{insert your favorite here}} pie and get really good at it, but you'd be getting further behind everywhere else.

Take the database space for example. How many types of databases could you name? (relational, key-value, document, graph, etc.) How many products or engines could you name within each type? 1-3? Have a look at db-engines.com. Could you name half of the categories on the left? Would you have expected over 200 listings of DB systems? If you needed a database in your next system, which one do you pick? Certainly nothing below ranking 100, right? . . . ranking 50? . . . ranking 10?

And there are many parts to a software system. And each project is different.

To jump to the point, I'll just ask the question I want to answer...

Can you pick the wrong technology? 


No.

Not really, anyway.

You might pick a solution that is overkill for your task. Or maybe one that doesn't do everything you need so you have to bolt on extra bits.

Most software languages are capable, especially with the right library. And most software can be used for a variety of purposes, even if not intended. You see, developers are creative and industrious. They will find solutions no matter how poorly the technologies line up. When it needs to get done, it is surprising what comes together. (Or how the definition of "done" changes ; ) Having come from the defense industry, I've seen some old and crusty software that has been working for a really long time and continues to perform.

It is better to have an approximate answer to the right question than an exact answer to the wrong one.
- J. Tukey

So while most software languages, frameworks or systems could be used to solve your problem, there could be a "more right choice". The right combination of software can make your life easier, but as of yet there is not a "one size fits all" solution to every project, every time.

Consider the following questions as you start your next project and are choosing the technology. Hopefully these, and the many side questions they produce, can guide you to a "more right" decision and avoid some pitfalls.

How fun will it be?

I ask this first because it's probably the #1 reason you're considering changing software. Read a good scrolling marketing landing page? Did some other company manage to use this software in a cool way? Is it trending on some google search or stackoverflow question list? 

Honestly, these are not bad reasons to try something new. Like I mentioned before, it will probably work out for you (at least initially). But be careful. This question could get you into a lot of trouble, and suddenly the project isn't so fun anymore, yet it's too late to switch.

Can it do what you need?

Now here's a good question. It involves knowing what your new project is doing, and it is especially meaningful if there are boundaries to your requirements. The project with a fixed list of features has more latitude in technology choices than the project that "will start small, but we'll grow it into our business for the next 3-5 years."

Have a web app that will be about 4 pages and 10 components with 5 API calls to the server? Pick any framework you want! Every web framework can do that. How about 10 pages with 15-20 components that fully drive an API? Better think about what it is you're trying to do before jumping in.

What can't it do?

The opposing side of the previous question. Identify where the limits exist, and there will be limits. Are those limits crucial to your project? Will they be a problem in the future? (Is that future 2 to 3 major releases away?) Just knowing what problems may come up will help you be ready and make you think more about what you're doing (design). You don't need to solve them all now, but don't go in blind.

Can you afford to learn?

I worded this question to emphasize that learning is important. Different concepts and approaches will be useful for sure, and learning them builds options for following projects (or resumes).

But learning takes time. What kind of schedule are you under? What is the learning curve? A new framework will have a large curve while a new library or component will be smaller. How much new stuff are you using? New language, new framework, new database and new deploy tools all at once? Careful there . . .

There is a cost to learning and some projects will be able to afford it, while others will not. Is it worth the cost this time? Tried and true technologies that your team is comfortable with will go a long way and add speed and consistency to your development, though be aware of the changing landscape in case your stack gets left in the dust and you're playing catch-up.

How strong is the ecosystem?

Remember how Open Source software and collaboration are changing the landscape of software these days? Use that to your advantage. It's not like a new software technology is thrown at you and you have to go read an O'Reilly book or standards document to even get started.

Is the software documented, and does it contain examples? Is it a one-command install? Are there blogs? How many questions are there on stackoverflow? How many issues and pull requests on its github pages? These are just a few questions to help determine if the existing ecosystem is mature, because it basically comes down to time. Will it take hours or days to fix that bug or get something set up? Chances are you're not re-inventing anything, and the question has already been asked or the problem already looked at. Think of where you can go for advice.

The other side of the software ecosystem is longevity. Will your technology choice be around as long as you need it? Will updates happen that fix your problems? Will too many updates happen that cause you problems because it's release 0.3.45 beta? It is easy to start a library these days, but will it be maintained?

Of course, if you only use mature and stable software, you might be missing out on getting a technological advantage of trying something new, or miss the chance to be part of something from the ground up. Is that important to you?

Wrap Up

A lot of rhetorical questions flow through my head when choosing software. I do not want to dwell too long on the choice, but cannot afford to blindly choose or forego advice when picking something new. There may not be a perfectly right technology out there, but for your team and your project, it is possible to pick one that is more right than others.

Wednesday, March 2, 2016

Parse to AWS - the Migration

We used Parse.com for a simple integration with Facebook login, and to manage users on a project. Parse was well documented and easy to work with. The integration took about a day, which was surely faster than managing our own user accounts or writing our own Facebook authentication. The next week, Parse made its shutdown announcement, which of course gave us some things to think about.

While our Parse usage was small and we had a few weeks before the data went lights-out, what would it take to transfer the service? The intent of this exercise is to determine what is needed, and the steps necessary, to transfer from real Parse to an in-house version. The article by AWS made it seem easy, so I'll note here the things I found while following it.

Server Setup

Planting Beans

AWS made it simple to create a new parse-server-example, with a one-click button to start a new application on their ElasticBeanstalk service. From start to finish it was really more like 4 clicks, but still simple.

Note: You need AWS Policy access for this link to work. I added "ElasticBeanstalkFullAccess" to my IAM role group. Otherwise the link doesn't go anywhere useful, which you'll know because you can't select any options on the Beanstalk home screen.

Get Some Mongo

For Parse to work, you need a Mongo database (version 2.6.x or 3.0.x). You could roll your own on AWS, but having someone else do it for you is faster. We went the MongoLab route for their free tier. Signup and setup were as easy as expected.

Application Integration

And that about wraps up the AWS How-To. Now that we have a server, everything works right?

Well, not quite. Our app has three components that connect to Parse.com: a NodeJS server (that makes HTTP requests), a javascript web client, and a Xamarin client (namely for iOS). Two things need to be addressed: URLs and keys.

URLs

Most client SDKs point to "parse.com" or maybe "api.parse.com". How do we point these compiled client libraries to our new server?

Turns out, the answers were at the bottom of the parse-server-example readme. There are values in the SDK libraries that allow the URL to be changed, and the client key is basically unused on a self-hosted parse server.
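For the javascript SDK, the change looks roughly like this. This is a config sketch, not our production code; the app id, key, and URL values are placeholders for your own:

```javascript
// Sketch: pointing the Parse JS SDK at a self-hosted parse-server
// instead of the default api.parse.com endpoint.
var Parse = require('parse/node');

// appId must match the server's configured appId; the javascript key
// is effectively ignored by a self-hosted server, so any value works.
Parse.initialize('myAppId', 'unused-javascript-key');
Parse.serverURL = 'https://parseserver-xxxxx-env.us-east-1.elasticbeanstalk.com/parse';
```

The other SDKs have equivalent knobs; the parse-server-example readme lists them per platform.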

For our javascript SDK, which was running on SSL, putting in a non-secure URL didn't work, giving a good error like:

Mixed Content: The page at 'https://localhost:5010/game.html#/landing' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://parseserver-xxxxx-env.us-east-1.elasticbeanstalk.com/parse/login'. This request has been blocked; the content must be served over HTTPS.

To turn on SSL on the beanstalk server, you need a certificate. AWS Certificate Manager can help sort those, and the Load Balancer configuration in the Beanstalk console can turn on port 443 listening using one of your valid certificates. 

Keys

The client applications are given keys by Parse, typically one for each specific SDK. The "javascript key" allows javascript access from our web client, the "REST key" allows HTTP access from our server client, and similarly for Xamarin. How are those included in the Elastic Beanstalk instance, when the only environment variable is for the "Master Key"?

Also, the Facebook app ids need to live on the server somehow.

In the parse-server readme, there is an option to use environment variables, which could be configured via the Beanstalk console. Whether the AWS server example respects them when the instances refresh remains to be seen.
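Based on the parse-server readme, the per-SDK keys can be supplied in the server's configuration, with the values pulled from environment variables set in the Beanstalk console. A hedged sketch of that wiring (the environment variable names here are my own, not necessarily what the AWS example expects):

```javascript
// Sketch of a parse-server config reading its keys from the environment.
// Env var names are illustrative; match them to your Beanstalk settings.
var ParseServer = require('parse-server').ParseServer;

var api = new ParseServer({
  databaseURI: process.env.DATABASE_URI,        // e.g. the MongoLab URI
  appId: process.env.APP_ID,
  masterKey: process.env.MASTER_KEY,
  javascriptKey: process.env.JAVASCRIPT_KEY,    // web client
  restAPIKey: process.env.REST_API_KEY,         // NodeJS server client
  clientKey: process.env.CLIENT_KEY,            // Xamarin/mobile client
  facebookAppIds: [process.env.FACEBOOK_APP_ID] // Facebook login
});
```

That would also answer the Facebook app id question, since it becomes just another environment variable alongside the keys.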

Wrap Up

I'm stopping this investigation at the certificate chain, since we don't have a wildcard cert and SSL is needed in our app (so mixing non-SSL isn't an option for me). It would be possible to keep going, and I think the entire parse server would function as a drop-in replacement with minimal code changes, but I'll have to revisit this another time to completely integrate with our existing app.