
As an iOS engineer, for most of my career I have been “transforming JSONs into beautiful UI”. Compared to backend development, handling large amounts of data and doing performance optimizations are not a typical part of our work. Of course, performance does matter from time to time — especially for keeping the UI smooth — but the techniques are often different, such as reusing views or offloading expensive work from the main thread. Additionally, if we want the client to be thin, most of the heavy lifting is delegated to the server: content ranking, search, filtering and so on.

However, sometimes you still have to perform expensive operations on the client side — for example, when for privacy reasons you don’t want some local data to leave the device. It’s easy to accidentally make those parts of the code extremely inefficient, especially if you haven’t yet built the muscle of quickly spotting potential complexity issues. Algorithms and data structures do matter — something I only truly realized several years into my mobile career, and something I still see overlooked in the industry. Of course, early optimization is not needed and may even do harm (see premature optimization), but even basic calculations can become performance bottlenecks that severely hurt user experience.

There is only one way to solve this — embrace the basics, which means using appropriate algorithms and data structures for the task at hand. One real example I always recall is a feature I built for one of my projects many years ago. For an invitation flow, I had to implement contact merging, where the data came from three different sources: the backend, a social account, and the local iPhone address book. We wanted to combine contacts from these sources into one if they had any overlapping channels (phone numbers or emails). The result would be an array of contacts with all their channels, with no channel shared between two different contacts.

At first, my naive approach was to go contact by contact, check whether any contact in the remainder of the list had overlapping channels with the current one, merge them if so, and repeat. The repetition was needed because, for example, the last contact in the list could have two channels — one overlapping with the current contact, and another that had already appeared in an earlier contact — which would mean going through the list again.

I implemented this, and it worked pretty reliably, here is the pseudocode:

// Repeatedly sweeps the list, merging the first contact with everything
// it overlaps with, until a full pass produces no merges.
func slowReliableSmartMerge(contacts: [Contact]) -> [Contact] {
    var mergedContacts = contacts
    var results = [Contact]()
    var merged = true

    while merged {
        merged = false
        results.removeAll()

        while !mergedContacts.isEmpty {
            // Take the first remaining contact and fold everything
            // that overlaps with it into a single contact.
            var commonContact = mergedContacts.first!
            let restContacts = mergedContacts.dropFirst()

            mergedContacts.removeAll()

            for contact in restContacts {
                if contact.hasNoOverlappingChannels(with: commonContact) {
                    mergedContacts.append(contact)
                } else {
                    merged = true
                    commonContact = Contact.mergedContactFrom(contact: commonContact, otherContact: contact)
                }
            }
            results.append(commonContact)
        }

        // If anything was merged this pass, sweep the whole list again.
        mergedContacts = results
    }

    return mergedContacts
}

An experienced engineer would quickly spot the issue here, but please bear with me for a minute. I tested this on my device, which had roughly 150 local contacts, 100 friends on social media, and a couple dozen users from the server. It would finish in a couple of seconds behind a spinner — “not a huge deal”, I thought, and moved on to the next feature. Test devices had far fewer contacts, so it worked instantly there. Then, a couple of weeks later, we started getting reports from users that the spinner could take a minute or even longer. I realized the issue was related to complexity: the approach I had taken was at least O(n^2), similar to bubble sort — and with its repeated passes over the list, it could get even worse.
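To get a feel for how quickly quadratic work blows up, here is a tiny self-contained sketch (the contact counts are made up for illustration; the function name is mine):

```swift
// A single naive pass compares each contact against every contact after it,
// which is n * (n - 1) / 2 comparisons — and the outer "repeat until nothing
// merges" loop can multiply that further. A hash-based pass, by contrast,
// does work roughly proportional to the total number of channels.
func pairwiseComparisons(forContactCount n: Int) -> Int {
    n * (n - 1) / 2
}

for n in [50, 150, 300, 1000] {
    print("\(n) contacts: \(pairwiseComparisons(forContactCount: n)) comparisons per pass")
}
```

At 150 contacts that is already over 11,000 checks per pass; at 1,000 contacts it is nearly half a million.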

I quickly discussed it with another engineer at a whiteboard, and we came up with a hashmap-based approach that optimizes this significantly:

func smartMerge(contacts: [Contact]) -> [Contact] {
    // Every known channel (phone or email) maps to the contact that currently owns it.
    var channelToContact = [String: Contact]()
    // Every surviving contact maps to the set of channels it owns.
    var contactToChannels = [Contact: Set<String>]()

    for contact in contacts {
        var mergedContact = contact

        for channel in contact.allChannels {
            if let matchingContact = channelToContact[channel], matchingContact != mergedContact {
                // This channel already belongs to another contact — merge the two.
                let combined = Contact.mergedContactFrom(contact: matchingContact, otherContact: mergedContact)
                let combinedChannels = (contactToChannels[mergedContact] ?? [])
                    .union(contactToChannels[matchingContact] ?? [])

                // Re-point every channel of both old contacts to the merged one.
                for mergedChannel in combinedChannels {
                    channelToContact[mergedChannel] = combined
                }

                contactToChannels[mergedContact] = nil
                contactToChannels[matchingContact] = nil
                contactToChannels[combined] = combinedChannels

                mergedContact = combined
            } else {
                channelToContact[channel] = mergedContact
                contactToChannels[mergedContact, default: []].insert(channel)
            }
        }
    }

    return Array(contactToChannels.keys)
}

The new complexity was roughly linear in the total number of channels, the spinner would just flicker for a split second, and the tests were luckily still green.
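To make the idea concrete, here is a minimal, self-contained sketch of the same technique. The `SimpleContact` type and the `merge` function are hypothetical stand-ins for the real model, expressed with value types and an index map rather than the class-based version above:

```swift
// Hypothetical minimal model; the real Contact type had many more fields.
struct SimpleContact: Hashable {
    var name: String
    var channels: Set<String>   // phone numbers and emails
}

// One pass over the contacts: every channel maps to the index of the group
// that currently owns it, so each lookup is O(1) on average.
func merge(_ contacts: [SimpleContact]) -> [SimpleContact] {
    var channelToIndex = [String: Int]()    // channel -> index into `groups`
    var groups = [SimpleContact?]()         // nil marks a group merged away

    for contact in contacts {
        // All existing groups this contact overlaps with.
        let overlapping = Set(contact.channels.compactMap { channelToIndex[$0] })

        var merged = contact
        for index in overlapping {
            if let group = groups[index] {
                merged.channels.formUnion(group.channels)
                if merged.name.isEmpty { merged.name = group.name }
                groups[index] = nil   // tombstone the absorbed group
            }
        }

        // Re-point every channel of the merged group to its new index.
        let newIndex = groups.count
        groups.append(merged)
        for channel in merged.channels {
            channelToIndex[channel] = newIndex
        }
    }

    return groups.compactMap { $0 }
}
```

A contact that shares one channel with one group and another channel with a second group correctly collapses all three into a single contact in the same pass.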

Since then, I’ve been much more alert about client-side computations that can have variable-sized input. This all seems very obvious to me now, but back in the day it didn’t look too important. I think a proper understanding of the complexity of various algorithms and data structures makes you a much better software engineer, which leads to better products. After all, this is how the big tech companies hire — they value coding skills more than knowledge of particular frameworks.

These days, this is also important for folks who switch to software engineering from other areas — they often start their careers with simple projects that involve UI work or gluing together things built on top of well-known frameworks. I’d encourage them to also master the fundamentals, like algorithms, in order to excel at this job.

devstory   ios   swift   mobile development   programming   development

Disclaimer: the ideas in this article are based purely on my personal experience working in big tech and smaller companies over the years and multiple conversations I’ve had with other people working in FAANG. Not all big companies out there do the things I describe here, and some smaller companies don’t adopt such practices either.

Smaller tech companies often get inspired by what big companies like FAANG do — how they manage projects, organize office space, hire talent and write code. While it can be useful to leverage some of these best practices, I believe some of them can actually bring harm if followed blindly. Let me describe several things that I find counterproductive in a small-company environment but that are still often adopted because the grown-ups do it.

1. Interviewing

The traditional interview at a big tech company is a standard mix of coding, system design and behavioral sessions. This is what the Cracking the Coding Interview book is about, and it’s what LeetCode is all about too. As a result, lots of engineers train themselves to solve algorithm and data structure riddles, and then never rotate a tree or find the shortest path between two nodes at work.

But does it have to be like that? Big companies do it this way because they value pure problem solvers who can adapt to any framework or tool — in big companies, it’s often an internal framework that isn’t used anywhere else. There can also be teams working on a new language or some cutting-edge technology. The belief is that a person who can reliably solve algorithmic problems and reason about complexity can also perform well at whatever kind of programming the job throws at them.

Smaller companies, in contrast, usually need work on a product built with a framework that’s well known in the industry. For this reason, the interview can perfectly well be a test task that is much closer to the actual job. Pair programming sessions can work great too — especially if the goal is to find a teammate who will fit into a small team.

2. Building your own tools and infrastructure

I’ve seen smaller companies try to host their own git or hg repositories or set up fully custom CI pipelines. Big companies often build tools from scratch for the following reasons:

  • Such tools didn’t exist on the market when the company first needed them.
  • They need some custom features that the majority of the market doesn’t care about.
  • They don’t want to depend on other services that can go down unexpectedly.

In my opinion, smaller companies shouldn’t spend too much time re-inventing solutions that already exist on the market. They can simply compare those solutions and choose the one that provides all the necessary features, is reasonably priced and has a good reputation:

  • Do you need to store your code in a version control system? GitHub or Bitbucket are probably your best options.
  • Do you need a continuous integration system? Consider GitLab’s or GitHub’s CI functionality.
  • Need a mobile CI/CD pipeline? Use Bitrise or another specialized platform.

Using an existing battle-tested service usually saves a ton of money and allows people to focus on the actual work.

3. Heavy process

Big companies introduce heavy process (such as required system design or product reviews) because of:

  • Their scale — when there is a long chain of people between, let’s say, a director of product and a product team, they want to make sure an individual team doesn’t go rogue and build something very different from the high-level vision.
  • Fear of shipping something wrong — this fear is often greater than the appeal of shipping a breakthrough, which is why it seems easier to introduce a process that protects against potential screw-ups.
  • Paper trail — some people have to be held accountable and responsible for decisions.

All these things are not always needed in smaller companies, where everyone usually knows each other. In such an environment, it’s often reasonable to trust people over process. For example:

  • Two people on a project don’t need a daily stand-up — they can just informally sync throughout the day (maybe even async).
  • A product designer on a small project doesn’t have to take all the wireframes to a design review every time there is a change — instead, after there is an initial alignment, they can evolve the design independently.

Conclusion

I hope these ideas will help some teams take another look at the way they work and ask whether they actually need to mimic what the big companies do — or whether they can drop the things that make them unnecessarily slower and focus on what really matters to them.

If this article gets enough traction, I will follow up with the list of three things that are used in big tech but for some reason are often overlooked in smaller companies.

mobile development   programming   development

Innovation comes from trial and error: scientific breakthroughs and successful inventions are usually born out of countless attempts — just like Edison’s light bulb.

It looks like the same principle applies to digital products. Luckily, in software we have the true luxury of being able to run experiments in production and learn from them without huge overhead. Compare this with construction, where a building can be built only once, then tweaked just a little, and the learnings can only be applied to the next project. And yet, in startups and small companies experiments are surprisingly uncommon — while in some large companies the whole established product can be an automated A/B-test machine. Let me share some points to keep in mind while running experiments to ensure a successful outcome. Even if you are not running experiments now, I hope you will see why they can be useful and what to pay attention to.

1. Have a clear hypothesis

This is the most important tip. Don’t even start unless you have a clear understanding of what the proposed change should bring, and ideally an action plan for what happens after the experiment concludes.

Good example:

If we reduce the number of steps in the onboarding flow, more people will finish it and start using our service. If this turns out to be true, we will roll out the shorter flow.

Bad example:

If we redirect some of our app users to our website, more people will use our service.

Why it’s bad: it’s not clear whether the extra usage comes from the redirect itself — maybe it’s just people who have to finish a critical task on the web. And there is no clear action plan: should we wind down the web version, prioritize the app, redirect only on certain pages, and so on?

2. Adjust for the stage of the product

If your product is rather established, you can run smaller experiments, each of which can move certain metrics. Then you iterate and gradually improve the whole thing. There is almost certainly no point in making drastic changes that bring a lot of chaos into individual metrics.

If your product is still looking for product-market fit, then basically every change can be big enough to steer the whole thing in a new direction, especially if there are not many users yet. That means many decisions should rather be driven by your product vision and intuition, although you should still make informed decisions and measure the outcomes. And again, see the first point above — always have a clear hypothesis and an action plan.

3. Running a controlled experiment is better than not running it at all

Some people on the team may be skeptical about a proposed change. For example, a new payment method could bring in a lot of new purchases, but perhaps not enough to justify the flat fee you’d have to pay for the integration. Running an experiment on a small share of users and proving it in a real-life scenario is much more valuable than making such projections on paper, especially when the internal conversation is stuck and there is no clear path forward. If the implementation cost is moderate, the best move is usually to get buy-in from leadership and be explicit about the hypothesis and the action plan for each outcome.

4. Have the right setup and tools

Always make sure that:

  • Test and control groups have consistent experience — i.e. a user from the test group will always have the test experience during the lifetime of the experiment.
  • Results are statistically significant — you can use some online tools to verify you get a proper result, not just random noise. Also keep in mind that the bigger the metric move you expect, the fewer participants you need to prove it — and vice versa.
  • Metrics are correctly calculated — meaning that you can reliably measure the outcome for the test vs control groups.
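For the significance check, here is a rough sketch of a two-proportion z-test (a simplification — real experimentation platforms also account for repeated peeking, variance and more; the function name is mine):

```swift
// z statistic for comparing conversion rates of control (A) vs test (B).
// Roughly, |z| > 1.96 corresponds to p < 0.05 for a two-sided test.
func zScore(conversionsA: Int, totalA: Int, conversionsB: Int, totalB: Int) -> Double {
    let pA = Double(conversionsA) / Double(totalA)
    let pB = Double(conversionsB) / Double(totalB)
    // Pooled rate under the null hypothesis that both groups convert equally.
    let pooled = Double(conversionsA + conversionsB) / Double(totalA + totalB)
    let standardError = (pooled * (1 - pooled)
        * (1.0 / Double(totalA) + 1.0 / Double(totalB))).squareRoot()
    return (pB - pA) / standardError
}
```

It also illustrates the point about effect size: a 10% to 15% move is clearly significant at 1,000 users per group, while a 10% to 10.2% move is indistinguishable from noise at the same sample size.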

5. Be aware of other experiments that can affect your experiment results

Some experiments can be affected by external events, such as seasonality or an operating system update. Others can produce a different outcome because of tests running simultaneously. Try to avoid this by making the experiments either smaller or isolated from each other.

Example:

You introduce a new login screen, and you also add a Sign In with Google button.

Probably the best approach here is to split users into four independent groups (two-by-two: old/new login screen crossed with/without the Google button) and analyze them accordingly.
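A sketch of how such a two-by-two assignment could be done deterministically (the experiment names and the hash are illustrative — production systems usually use a dedicated assignment service):

```swift
// Stable hash (djb2-style). Swift's built-in Hasher is randomly seeded per
// process, so it can't be used for persistent experiment assignment.
func stableBucket(userID: String, experiment: String, buckets: Int) -> Int {
    var hash: UInt64 = 5381
    for byte in (experiment + ":" + userID).utf8 {
        hash = hash &* 33 &+ UInt64(byte)
    }
    return Int(hash % UInt64(buckets))
}

// Two independent 50/50 splits produce the four two-by-two groups, and the
// same user always lands in the same group for the lifetime of the tests.
func assignGroups(userID: String) -> (newLoginScreen: Bool, googleButton: Bool) {
    let login = stableBucket(userID: userID, experiment: "login_screen_v2", buckets: 2) == 1
    let google = stableBucket(userID: userID, experiment: "google_signin", buckets: 2) == 1
    return (login, google)
}
```

Because the two splits are keyed on different experiment names, the groups are independent, and each split can be analyzed on its own as well.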

6. Even negative results are a good learning

Sometimes people are afraid that their experiment will lead to worse results. I’d say even that is a positive learning: if the experiment was run in a controlled environment with a clear hypothesis, you didn’t affect real users much, and, most importantly, you now know that this idea won’t work out and can safely shelve it until better times. Treat it as the lucky scenario compared to shipping the same change to everyone because someone strongly believed in it without running an experiment at all.

If all these points are taken care of, the experiments should provide useful learnings that make your product better. Please share any interesting experiments you’ve run and the eye-opening insights they brought.

Note:
Recently I’ve started writing more frequently about software engineering and mobile development in particular — mainly in English — to capture some thoughts I find important, hoping some of them will be useful to other people too.

and how to fix it

The Apple Watch is an incredibly good fitness tracker. Heart rate, calories, distance covered — all of it is tracked automatically and synced with the iPhone. And two mechanisms — daily goals and activity sharing with friends — cleverly motivate you not to slack off.

With the watch, working out has become more comfortable. Previously, to go for a run, listen to music along the way and review the workout summary afterwards, you had to take your phone with you and plug headphones into it. But exercising with a modern phone and wires is inconvenient, and you can’t track your heart rate without extra devices anyway. Now all you need is the watch and wireless earbuds.

But there is one flaw — out of the box, the watch supports only a small set of workouts:

  • walking,
  • running,
  • cycling,
  • swimming,
  • elliptical,
  • rowing,
  • stair stepper,
  • “Other”.

There are no ball or team sports at all. I work out six or seven times a week, and not a single one of my workouts is on the watch:

  • tennis,
  • basketball,
  • traditional strength training,
  • functional training,
  • CrossFit.

To track all of this, at first I used the “Other” category. But calories in that mode are counted incorrectly, because it uses the same counting principle as brisk walking.

The App Store turned out to have plenty of apps for various sports, which prompted me to dig into the documentation for HealthKit — the framework for working with workouts and health data. It turns out the SDK supports tracking 70 (!) sports.

That said, almost all of those apps are somewhat clunky.

So in the end I decided to write a minimalist watchOS app containing only the workouts I need, with the ability to easily add any other. The app got the code name “Just Do It”, because it doesn’t even have goals (I don’t need them). There is only sport selection and the main stats shown during a workout — time, calories, current and maximum heart rate.

And the main thing — I wanted to learn how to write apps for the watch. Here is what came out of it:

It ships with a companion iOS app that can start a workout on the watch.

The source code is on GitHub.

If you use the watch and want to add your own sport — and you have a Mac, Xcode and basic programming skills — it takes just a few lines of code in WorkoutConfig.swift.

Go work out! :-)

apple   apple watch   devstory   ios   iphone   swift   watchos   mobile development   programming   sports