Working with an external agency (or even an in-house team) is not always a smooth and easy process. When we ask designers to translate complex business requirements into beautiful and useful products, we often meet the most common barrier: the transfer of knowledge. So, how can you – a person with extensive business knowledge about your company – pass all of this information to the people who need it? How can you make sure that the design they create will meet your needs perfectly? The workshop is the answer.
Well, the sad fact is, none of us are mind readers, including designers. We are, however, experienced in extracting important information and knowledge from stakeholders and turning them into actionable points. This may almost sound like magic, but our most effective secret weapon is actually the humble workshop (and some research too).
A workshop, in contrast to what some people may imagine, is not about playing with post-it notes or brainstorming ideas. It’s a well-defined and controlled process that involves ideation, knowledge sharing, decision-making and prioritizing work.
And the benefits of this process are pretty huge.
THE BENEFITS
For example, well-conducted workshops:
But, of course, these benefits also depend on the people involved in the process. Choosing the right set of participants helps us make the most of it.
To get the best results, a workshop should include between 3 and 10 participants along with an experienced facilitator whose job is to make sure that everything goes smoothly – so that there are no ineffective discussions or disputes, and everyone is working towards the same goal. There are also two important rules to follow:
Plus, making sure that you are well-prepared will help you make the most of it, no matter how experienced you are.
The most significant part of preparation is to first clarify the most important goal of the workshop. Since this is meant to be an effective and time-saving measure, you won’t be able to do absolutely everything in these few hours. However, if there are many topics and challenges that you would like to cover, you can always carry out a few workshops. So, how do we handle this ourselves?
As you can see, the entire process is very different from a typical brainstorming session during which people are usually just tossing ideas around and the most charismatic person in the room gets their way. A good workshop is all about collaboration and working with various ideas to create something that works best for the determined goal.
However, we typically run a few different types of workshops in order to achieve the best results.
As stated above, each workshop can vary depending on its goal. We typically start by identifying a few important stages in the project, and then we establish when a workshop would be the most beneficial, and figure out the most effective way of working. Even though we customize workshop activities a bit based on individual goals, we also have some repeatable formulas to work with. Balancing customization and process automation is the key to success!
1. KICK-OFF WORKSHOPS
The main purpose of this initial workshop is to facilitate knowledge sharing and define the scope of processes at the beginning of every project. During this phase, the partner’s main interest can often be summed up with: “I want to develop an app with intent, but I also want to make sure they (the designers) understand what I do and what I want”. Well, worry no more, as this workshop is the perfect way for the partner and the design team to reach a common understanding.
The kick-off workshops provide answers to a lot of questions, such as:
The kick-off workshop is also a great way to get to know each other and encourage team spirit. Some techniques that we can use during this event are: proto-personae, impact mapping or the impact vs. effort matrix for prioritization.
Results & benefits:
2. DESIGN SPRINTS
“I have a bold idea for a product, but I’m not sure if it is going to work client-wise or business-wise”. Ooh, this is a topic we love! Big challenges are our thing, and we have an effective and tested method of tackling them: design sprints. This type of workshop was developed at Google and has been used by some of the most innovative companies in the world, such as LEGO, Slack, Headspace, and many more. This comes as no surprise, since design sprints are really helpful in navigating the most complex challenges effectively.
Design Sprints last a bit longer than ordinary workshops, as they require 4 or 5 full days of work. During the first 2 days, our team and the partner’s team spend time on-site (or they can also do this part remotely), collaborating on ideas that are to be tested with real users within the next 2 or 3 days.
The formula is simple:
At the end of the design sprint, our partner has a working prototype as well as an answer to the critical question: will this idea work?
It’s a fun and, more importantly, an efficient way of validating business assumptions. You don’t spend weeks or months building your product only to find out at the end that no one wants to use it. Taking 5 days to test an idea at the beginning sounds much better than wasting weeks or months, doesn’t it?
Results & benefits:
3. PRODUCT IMPROVEMENT WORKSHOPS
What if you already have a working product, but want to make it even better? The answer, as you may have already guessed, is to hold a workshop! The biggest challenge in further developing an existing product lies in walking the fine line between improving a good product and instigating so-called feature creep.
The most common pattern, in startups and more mature organizations alike, is this: the more people are engaged in product development, the more ideas for improvements and new features appear. But how can you distinguish great ideas (that will have a real impact on the user experience) from poor ones (that will just make the product more complex without any added value for the end user)? Of course, the main tool for prioritization should be a clear roadmap. But in order to create one, we need to hold an effective workshop to gather different stakeholders and discuss their expectations and ideas around the product.
This kind of workshop can also act as a great tool for discovering new business opportunities. When you map out your partner’s experience, from A to Z, you can easily spot bottlenecks and pain points that you can try to address in the future.
Some techniques that we use during this workshop are: the Customer Journey Map, Scenario Mapping, Design Studio and some prioritization techniques.
Results & benefits:
4. BUSINESS WORKSHOPS
Many people have already discovered that when they build a new business, they don’t have to spend days or weeks creating a business plan in an Excel spreadsheet that, in the end, will have nothing to do with what they actually need for their business. There are many lean methods that can help you quickly prepare a business plan to test and validate.
These methods – such as proto-personae, the Value Proposition Canvas or Business Model Canvas – are perfect for a one-day workshop. Just gather your (future) teammates and work together on creating a business plan from the ground up – a plan that you can later validate with research and prototypes.
Results & benefits:
So, is a workshop worth organizing? The answer is: yes, definitely. As you can see, you can solve almost any problem more effectively with just a one-day or two-day workshop. The types of workshops mentioned above are just suggestions, and you can organize yours however you want. There are many sources of great workshop techniques on the Internet, so just do the research.
The most important thing is to make sure that you have a clear goal and plan for the workshop before jumping into it. This will save you a lot of time and guarantee that you will meet your challenge or find some spot-on ideas.
And of course – if you need any guidance on organizing a workshop for your company, don’t hesitate to contact us!
When starting a cooperation with a software development agency, one of the most crucial decisions that must be made right off the bat is the selection of an appropriate pricing model. It's pivotal, as it will have a huge impact on the whole project: processes, time-to-market, price, and the client's involvement and comfort of work.
Up until recently, companies were choosing mainly between fixed price and Time and Materials (T&M) pricing models. Each of the models has its own strengths and weaknesses, and is appropriate for different project methodologies.
After a decade of working closely with clients from all around the world, learning how this business of expectations vs. outcomes works, we have developed and incorporated a new pricing model: Full Time Equivalent Engagement (FTE Engagement).
Allow me to detail the two most common pricing models first and then, by comparison, show how FTE Engagement works and why we think it best suits our culture and philosophy.
Choosing a billing model for a project does not happen in a vacuum. It should be a consequence of choosing a specific software development methodology along with its processes and deliverables, be it waterfall or agile (regardless of whether we are talking about kanban, Scrum or Extreme Programming).
For the sake of simplicity, each IT project consists of roughly 4 stages:
In the case of waterfall, we move on to the next phase only once the former is finished, documented and signed off. In agile, by contrast, all stages run in parallel, with sprints that tackle specific problems or functionalities, each followed by the release of the latest working version.
The pricing model one chooses must correspond with the software development methodology. If you work in the waterfall model then you are probably doomed to use the fixed price contract. If, on the other hand, agile is your thing, you have mostly Time & Materials or FTE Engagement to choose from.
Fixed price means that you agree on a single, predefined sum for a specified scope of work within a certain period of time. To give you a real-life example, it’s like ordering a cake for your daughter’s birthday party. The scope you order from the bakery around the corner is a strawberry cake with lemon icing and your daughter's name written across the top in white chocolate. The deadline is two days from today and you agree to pay $35 for the service, sometimes in advance, too. This is the fixed price model in a nutshell, and it works pretty well for non-complex errands, as long as you receive exactly what you ordered. Sadly, that's not always the case.
Pros:
- Known price means no surprises
- Known timeline and set deadlines
- Little to no involvement in the development process from the client's side

Cons:
- Long process of planning and documenting each stage
- Focus on delivering the agreed scope of work, not on a better product
- High risk of ending up with a product that doesn’t fit the market
- Very hard to make changes during and after the project once it starts
- Difficult to change the product quickly in case of market or legal requests
So far the fixed price contract sounds great, doesn't it? At least on paper. You know well what you will get, when and for how much. No surprises. Unless, that is, it turns out the product isn't what the market or its customers wanted.
So what could possibly go wrong?
Software development in the waterfall methodology means that an application must be designed, detailed and described along with all its features and components before it can go into production. Every little element is going to be priced and documented separately. This means three things:
On top of that, once software developers start turning the wireframes into code, a few things might happen:
The worst thing, however, is that the fixed price model forces developers to deliver the final product based on the detailed scope of work as accurately as possible, rather than to deliver the best possible product. Due to the fixed price model's limitations, developers are rarely involved in the project before the development stage, so their concerns and ideas are not taken into consideration. And when they are finally given a stage to perform, it's usually too late for any changes.
Those are the reasons why most software development today is done using the agile methodology.
Time and Materials is an approach in which clients are billed for the time dedicated experts spend on developing the product, based on an hourly rate agreed on in the contract.
Pros:
- Flexibility of agile development
- Better product-market fit
- Work on the product itself starts earlier
- Lower risk of potential failure

Cons:
- Timeline is difficult to predict
- Costs are difficult to predict
- Higher involvement from the client side in the development process
Time & Materials model allows one to work in sprints, where it's often the client who decides on the next feature that will be designed or developed from iteration to iteration. Each sprint is followed by a demo of what has been built and allows validating both UX/UI and performance.
Each successive sprint is planned based on a greater understanding of the problem, which results in a product better adapted to the market realities and user expectations.
In this regard, though, T&M can require higher involvement in product development from a client. It may seem like additional workload, but it isn't from an outcome perspective. If one calculates the amount of time and energy that might be needed to fix a product that doesn’t meet market needs, that initial involvement really pays off.
Unpredictable costs are another worry with T&M. Customers fear that one day they will be surprised by an exceptionally high invoice they weren't ready for. Or, similarly, that it will be difficult to predict the ongoing monthly costs, keeping in mind that the project can change its course and its timeline a few times as it goes. If this happens, it gets harder to allocate and plan the project's budget and the costs can grow exponentially.
Before moving on to the FTE Engagement pricing model, let's go back to the question of whether it makes sense to combine the fixed price model with the agile approach.
The scope of work often changes in the agile methodology. We add features, modify them, and change the order in which features are developed as we go, as we see best for the product and its final outcome. Now imagine that every time we made even the smallest change, we would need to price it and produce detailed documentation before moving to the development stage. It would be the opposite of agile and of efficiency, which makes agile and fixed price an unlikely duo.
Most of intent's past projects were carried out on the basis of the Time and Materials (T&M) billing model. At some point, however, we decided this did not fully reflect the transparent nature of how we approach cooperation with our partners and clients. Therefore, we created a new model to meet our own high standards: Full Time Equivalent Engagement (FTE Engagement).
In short, FTE Engagement means hiring specialists exclusively and paying for the full man-days these specialists dedicate to your project and its end result. We call such specialised teams pods. Their strength lies in the fact that they can be deployed quite fast and with industry-specific expertise, adding tons of value to every project (be it the pod's extensive customer knowledge, tech consulting, or Go-to-Market and product-market fit experience).
Let me elaborate.
Pros:
- Predictable monthly costs
- Dedicated and exclusive team of specialists
- Higher efficiency due to the lack of task and project switching
- Far greater and industry-specific know-how of the team

Cons:
- Requires a higher dose of trust in a partner
- Deadline may be harder to predict
FTE Engagement is our proprietary concept that gives one the most significant advantages of employing someone full-time without the burdensome disadvantages of having someone on your payroll (sick leave, social security, medical insurance, holidays and restrictive labour laws, to name just a few).
In reality, this means that our developers only work on one project at a time. They devote their full attention, 8 hours a day and 20 days a month, to one client, which is rarely the case in a T&M model. They do not lose efficiency or focus, no time or attention is wasted on switching between projects and, thus, they can go really deep into understanding the key elements of a client's project, and that project only.
What is equally important, our partners most often need from a few to a dozen developers for their complex and long-term projects. Because developers usually work on one project, we establish teams of specialists that have already worked together and are industry experts, which makes the whole development process far more efficient and smooth. One can consider such a pod one's dedicated and exclusive product delivery team, just not in-house. They do, however, feel responsible for the product's success, as all owners should.
FTE Engagement has one more advantage over T&M. It makes each month's deliverables and costs predetermined, and they can't be exceeded, simply because each specialist dedicates no more than 8 hours per day, which gives 160 hours per month (8 hours × 20 working days). This makes it easier to estimate the value of the project over time, and there are no surprises, which is what clients sometimes fear when working in the T&M model.
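For illustration only, with an entirely hypothetical blended rate of $50 per hour: one full-time specialist translates into a predictable 160 h × $50 = $8,000 per month, no more and no less.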
However, this billing method requires a lot of trust between a client and a tech partner. At the end of each sprint you will not get a detailed report on how many hours of work each feature took. Instead, we encourage our partners to take part in demos that show the progress of work on an ongoing basis and allow them to test the product on the go.
One should never think about pricing models in total isolation from project management methodologies, and should always consider the pros and cons of each combination.
Fixed price contracts might be good for some very simple, predictable projects with limited features. In all other cases, it is better to opt for an agile approach that leaves you with either T&M or FTE Engagement.
Time and Materials is a very convenient pricing model, but in our opinion FTE Engagement has significant advantages:
If you would like to discuss details of your project and how we can go about it, let us know through the contact form.
Human-computer interaction, human-centered design, user experience (UX), user interface (UI) design and, among them, product design are more than simply buzzwords. The mouthful of the industry’s terminology, however, can be overwhelming. We are constantly being flooded with a multitude of definitions, names of professions, processes and terms that don’t tell us much. Even the oldest design lions often have problems with an unambiguous definition of what they are doing on a daily basis. This article will clarify what product design is and how we use it at intent.
Just a few decades ago, “to design a product" meant simply that: creating a blueprint of a commodity such as a chair, a knife or a teapot. With the appearance of computers and the development of information technology, the term “product” went beyond the well-known material world. The emergence of new technology helped set a course for something that is intangible but still invaluable to users and businesses alike: the digital product.
The first (or at least the most popular) reflections on creating digital products and their importance in human-computer interaction were those shared by Donald Norman. His book “The Design of Everyday Things” resounded throughout the design world, highlighting the importance of the experience that accompanies users while they interact with computers. This phenomenon wasn’t to be ignored, according to Norman. Giving more attention not only to systems’ functions, interface appearance and information architecture, but also to users’ feelings, impressions, touchpoints and behavior, effectively contributed to the coinage of terms such as user experience and user experience architect. The latter is what Don Norman called himself and the folks on his design team at Apple back in the 90s.
Soon after, user experience started to be considered an essential element of creating websites, systems and applications. And rightfully so. It helps to design those products in a way that not only meets business goals but also responds to the needs of real users. And as the truism goes: a happy customer is a loyal customer, and those, supposedly, generate the most income.
People in charge of designing the best possible solutions at the intersection of business goals and users’ well-being came to be called User Experience (UX) designers. Often mistaken for graphic designers or User Interface (UI) designers, UX designers aim to be users’ advocates while at the same time bringing value to the business that stands behind every product. Even though the work of UX designers is not as obvious and tangible at first sight as that of UI designers, who create the visual layer of a product or service, both have a significant impact on the success or failure of anything we want to build. The UX designer’s role is especially crucial when it comes down to whether or not a product will generate a return on investment, as well as how functional and usable it will be. As more and more companies notice the important role of UX and its value for the business, they distinguish two types of product designers:
UI and UX designers, through close collaboration, try to execute all of the above with the support of workshops, user testing, research and iterations. Of course, designers’ competences sometimes overlap and there is no strict line between the two. A UI designer can have a great sense of UX and a stellar skill set in prototyping. A UX designer, on the other hand, can have a crush on graphic design or be keen on user testing and meticulous research.
Every professional has their specific preferences and skills that define their role in a given project - and it is always unique. The trick is to adjust the resources and design strategy to the product we want to build, its goals, budget - and time restrictions. To make things even more complicated, we have recently been observing the rising popularity of a new job title, Product Designer, which extends the scope of the UX designer’s responsibilities, as it aims to cover all aspects of one’s experience with the product, including monitoring the product’s life-long position and sentiment on the market. However, as often happens with theories and definitions, it may be a bit controversial. As you can see, a proper design of digital products is the outcome of a team's massive effort, with different but equally important skills, rather than an individual performance.
Throughout the years, the UX design community started to notice that while taking care of users’ experience, we should really be thinking about the whole process of research, designing and launching a product which consists of not just the “design” work, but much more: defining and redefining business goals, market research, user testing, and even communication (copywriting, tone-of-voice) or marketing (branding).
The aforementioned Don Norman highlights that the term “user experience” is much broader than we used to think. According to him, UX starts with one’s first interaction with a product, its purchase along with the accompanying journey and, finally, the usage and product-related impressions. This is why we should see product design as the result of various disciplines that influence each other.
As pointed out earlier, designers are not the only ones who should be credited with the creation of new products. So, what exactly is product design, then?
The definition on Wikipedia tells us that there is no one consensually accepted definition of product design that sufficiently reflects the topic’s breadth. Therefore, we need to consider two separate, but still interdependent, meanings of the term. One defines product design as an artifact and the other refers to product design as a process related to this artifact.
Even though the adjective “tangible” does not quite fit our digital world, we can agree that it covers the idea of creating a product. Digging deeper for a more exact definition, we can find the following:
Or, quoting Interaction Design Foundation:
Although product design is considered a superior term to digital product design, it’s nowadays used as a synonym in the creative industry. It describes a process of designing and creating fully digital products such as apps and websites or products that have both digital and physical components, like electronic devices with complementary applications (IoT or Internet of Things).
One of the most popular design process models is the so-called Double Diamond. It consists of the following stages:
- Discover
- Define
- Develop
- Deliver
As hopefully proven above, designing in itself is crucial, but would lose most of its impact without all the other elements. The presented model is an idealistic and theoretical example of the design process, which can evolve depending on the situation. However, if you want to know our approach to the product design process, contact us - we will be happy to talk to you!
During the product design process, designers have to consider not only steps before and after the development stage where they give a product its form and function, but they also have to take into account other important factors:
At intent, we believe that product design is a process whose essential ingredients are a holistic approach and the close collaboration of all teams engaged in the project. It starts with a solid plan and business goals fueled by in-depth research to validate our clients’ assumptions, continues with meticulous design and continuous testing, and finishes with a reliable, state-of-the-art implementation. Last but not least, the most essential part of product design: creating effective, handcrafted solutions for the real problems of the target audience.
At the same time, we don’t want to be content with the status quo and pick just one correct definition of the products we make. Although in love with digital, we aim to break the boundary between the material and the digital and find new turning points and ideas at that very intersection.
Have we caught your interest? Check out how we cooperated on the design sprint with Goodify!
We can’t create a building with only the floor plans, can we? The same is true for product design. It is a process that includes the hard work of designers as well as experts in other disciplines.
The digital design industry is constantly evolving. Thus, new technological solutions and roles are appearing every day, which means no ultimate definition is given once and for all. As the ancient philosopher once said: panta rhei, everything flows. We’re sure this suits product design and designer definitions as well, but at least one thing should be stable in all of this: the holistic approach while creating a successful product that respects the end-user and brings value for the business.
Want to know more about UX? Check out our latest article about the advantages of UX research and when, in fact, you should do it.
Let’s imagine you’ve had this fantastic idea for a product or service. You managed to bring this idea to life, you put a lot of effort into creating this concept, refining the visual layer and coming up with a marketing strategy.
Perhaps you were inspired by some already existing products, used best practices that you know of, maybe even mirrored the business model of some popular product that you like and admire. Seemingly, you think, everything should go flawlessly. But for some reason, it turns out that despite the investments made, the product is not appreciated by the market and the business plummets.
What could be the reason for this? Well, there is a high probability that your assumptions about other people's needs and preferences do not correspond with reality. That is why it is essential to precede product development activities with a UX research phase, which focuses precisely on verifying assumptions, trying to understand users, identifying possible problems and then, based on that, formulating the solution that will be verified by the market’s end users - your customers.
UX research done well is a crucial part of a product's success. Only with a good understanding of the market and users‘ needs are we able to deliver product-market fit and, what’s incredibly valuable today, a product’s or service’s resilience to the changes constantly happening on the market. Using a wide range of methods, we are able to select the research techniques required to achieve the desired goals, depending on our product’s maturity or the specific questions we want answered.
Keep on reading to find out the advantages of UX research, selected examples of commonly used methods, as well as answers on how to use those methods in the product development process.
After that, I recommend you check my previous blog post about best practices for product design workshops with remote teams!
UX research is a specific product design & development phase focused on gathering functional requirements, researching target users and their needs, and constantly testing solutions on the target audience. The aim of such activities is to confront the concept and design work with reality, to verify whether the solution has a feasible chance of working in a real-world scenario and for real users. In short, in the course of research we define what we create, for whom, in response to what problems and obstacles, why we build this particular solution, and how. All of this happens well before we spend significant sums of money developing something not yet validated, saving ourselves a lot of money and even more headaches in the process.
The importance of UX research is best put in bullet-points. Every product or service should consider it for the following reasons:
There are plenty of methods that we can use, depending on our needs, type of product or service and the stage it is at.
There are two types of methods depending on what kind of data we want to collect: qualitative and quantitative.
Quantitative — giving answers to “what” and “how many” questions
Methods:
Qualitative — providing answers to “why” and “how” questions
Methods:
You can distinguish many different models when it comes to the stages of product development. One of them, described by the Nielsen Norman Group, is the division of product-related work into 4 stages: Discover, Explore, Test and Listen.
Stages of UX Research
Source: Nielsen Norman Group
This stage consists of collecting information and deepening knowledge about users in order to better understand what their needs are. It is especially important when creating a new product to verify whether our idea has a chance of success, but also when developing new features or services.
Methods:
Once we have successfully examined end users' needs, the next step is to define the problem we aim to solve and correlate those needs with the required design work.
Methods:
The testing phase takes place when we already have a prototype of the product as well as during development. Using appropriate methods, we make sure that the product fulfills its role and works as expected. At this stage we must make sure the designed solution is intuitive and understandable for users.
Methods:
During the entire product development phase, we collect information about users, their needs, changes in their behavior and emerging problems. This stage is an ongoing process and should never stop for the product or service to remain relevant. The methods that we can use include:
In short — investment in research always pays off and basically, regardless of the product and the industry, it is always worth doing. But let’s take a look at specific use-cases:
1. Creating a new product
When creating a new product, it is crucial to precede it with research as deep as possible, to identify the needs of the target users and make sure that the product we want to create will meet them. It is equally important to check the competition: what products are already on the market, what their strengths and weaknesses are, and where we see value that our product can deliver better than the others. In other words: what is its advantage over the competition.
For a product development team working together with startups and companies on their product ideas, it is essential to understand both the client and the end users at an early stage and, using UX research methods, align the whole team on the business goals and assumptions of the project.
The methods used when creating a new product are those listed in the Discover and Explore stages, including:
2. Adding new features
When working on new functionality for an already working product, we can confidently use the same approach as when creating a new product. It is equally important to research the market, understand users and define the problem we want to solve. However, at this stage we can also use the knowledge provided by the existing product and the feedback from its users, applying methods such as data analysis (e.g. Google Analytics), surveys or interviews.
3. Redesign
When redesigning a product, we can learn from our current user base and their behavior, so that design decisions are driven by data. This, however, doesn’t mean we shouldn’t use A/B tests or usability tests along the way, too.
4. Attracting new audiences
If we want to expand our product to a new group of users, we naturally need to get to know this group well and understand what their needs, problems, and habits are. Thanks to techniques like interviews, surveys, usability tests or creating user personas, we can make the research much more accurate.
5. Product’s end-to-end lifecycle
Research should be a permanent element of the work on a product during its whole lifecycle. Thanks to this mindset, we will be able to constantly improve the product and react to new problems and changes on the market. Having the right amount of data on our users, the competition and the industry in which we operate, we will be able to react faster to such changes and improve the product accordingly or, if necessary, even pivot our whole business model.
A product created and developed based on UX research methods is able to achieve a product-market fit, which determines the degree of product adaptation to the market needs. Founders often tend to focus on the solution rather than the problem itself while creating new products. According to Michael Seibel from a renowned startup accelerator Y Combinator “only through launching, talking to customers, and iterating will you actually find a product that reaches product-market fit”.
Knowing the market and our users, we can make informed decisions both when creating and developing a product as well as when a change on the market forces us to pivot or change.
Usability has a direct impact on user engagement and conversions. What is more, making sure the product is usable for people with various types of limitations or disabilities improves the overall user experience (it also benefits users with temporary restrictions, e.g. due to intense sunlight, using an app or a website in a hurry, in a noisy place, and so on).
Thanks to UX research methods such as user story mapping and prioritizing, we can agree in advance which functionalities are key and plan further stages of product development accordingly, which translates into risk and cost reduction (mind you, derisking is a recurring theme of UX research).
As mentioned above, knowing your customers and monitoring their behavior as well as the ever-changing market on an ongoing basis, we are able to respond to sudden changes (like the one we all have been experiencing in the recent months) much faster and make more informed decisions.
The UX research phase is often perceived as an additional cost and more time spent working on the product. Paradoxically, investing time and resources in UX research directly translates to big savings in the future, because it protects us from costly corrections and changes at later stages of product development.
Without good research, it may turn out that after the development phase our product does not fulfill its purpose, does not deliver value or is unusable for the desired audience. And so, the hours saved on UX research can turn into days, weeks or even months of extra work on improvements and changes later on in the process, with some of the previous work being thrown away. Not to mention how cost-inefficient that mistake may be. Thankfully, such potential spillage is totally avoidable by building UX research into one’s early-stage action plans and budgets.
It’s always important to do UX research: despite being additional work, it always pays off. There are plenty of methods you can use at different stages of your product lifecycle. Still, with only a few well-known and well-conducted methods, you will be able to collect precious data that will support your design decisions and translate into a better end product that users should appreciate.
If you’re still unsure whether you should consider investing in UX before jumping straight to software development or if this investment noticeably pays off in the future - check out our Simple Guide to the ROI of UX.
If you do, however, understand the importance of UX across all of a product’s lifecycle stages, contact us for your project estimation.
As a developer, you may face the need to introduce a payment method in the application logic at some point. Business is business, after all, and even a few cents at the end make a difference, regardless of you being a startup, scaleup or a corporation.
When you work with a backend infrastructure, there are plenty of players offering payment and credit card processing as a service, and today we will focus on one of the most interesting and flexible solutions we have had to deal with recently.
Sorry, we’ve already spoiled it in the title, but it’s obviously Stripe we’ll be tackling here.
One of the first checks that I like to do on an as-a-service solution is to figure out which languages are supported natively (and whether third-party developers have already enhanced any of those libraries).
Support for Stripe is pretty wide. In the case I am going through in this article, I focused on Node.js to dig down a bit more and noticed that, apart from the official library maintained by the Stripe team, there are more than 640 other packages on NPM.
That is certainly something to think about, but mostly I see integrations with third-party frameworks, so nothing that scares me like: “Oh, it’s so over-engineered that someone had to create a wrapper to make it more humane”.
Also, the library is being kept up to date. Stripe releases often and on time (their average sprint time seems to be around one week), and the frequency of bug fixes and updates speaks well for the security of their library.
Having this in mind, all I need to do is keep my package.json a bit more updated than usual, but I’d rather bother myself with periodic updates than regret it and waste time on avoidable issues.
Last, but not least, even the attention with which Stripe treats PRs is quite impressive.
On top of that, Stripe’s willingness to accept PRs and fixes even from external contributors is also worth a mention :+1:.
By the way, Stripe’s library supports TypeScript bindings too, so if you love strict typing - it’s a good one!
As an old school developer I love to have clear documentation. I’m from the pre-internet era when books were the source of truth and you had to rely on them and trust them in order to accomplish anything. In this respect, I pay a lot of attention to having a well-documented API.
And this is where Stripe shines: it provides a very detailed API reference to dig into and play with before any implementation, which is very helpful.
Digging around, it’s quite easy to get familiar with their naming conventions and to prepare a high-level flow of the steps we will need to achieve our project’s business goals.
A dedicated section covers errors, which are usually pretty verbose and detailed.
Worth mentioning, the package for Node.js supports automatic pagination, so that’s one less thing to worry about.
TL;DR: if you, unlike me, hate documentation and want to jump into the thick of it as fast as possible, there’s also a very detailed quick-start guide with examples of basic payment flows. Worth taking a look!
As mentioned before, Stripe's huge advantage is a micro-feature API structure that allows you to manipulate flows in the most profitable way for your company.
Keep in mind the first rule of business, which rightfully claims that not all businesses work in the same way; hence the customization and flexibility Stripe provides are its two most essential features.
I know that first hand. Recently I had to take care of implementing Stripe’s API in two different projects. One of them was more of a common eCommerce approach with payment as the last step of the user flow to finalize the whole order. Pretty standard, no?
The second one involved a more custom approach, with steps that included credit card verification and payment holds during a pre-transaction background check.
To be honest, I was very worried about the second one at first. My previous experience with other suppliers had been a nightmare of debugging against outdated documentation and figuring out how to deal with the related bugs.
In comparison, Stripe seemed like salvation right off the bat! I managed to achieve the goal quite fast, and the documentation was well-prepared and supportive, with verbose responses and code snippets for your language. Time saver!
As a developer myself, I suggest having at least a bit of development training first (this is not entirely a no-code solution, after all). Apple did this very well with the Playground tool, a powerful sandbox for Swift code where you can experiment quickly and easily, testing algorithms or functions in a lean and fast way without wasting time on setting up a whole project. For Node.js you can find similar options, like JSFiddle. Make sure you use them before firing up VSCode and creating a dummy project - it will save you some time.
Don’t forget to register on the Stripe website before you dive deeper into integrating its API with your website or app.
One thing I’m pretty sure of is that two keys will become your best friends at the beginning of your journey through Stripe’s sandbox: the publishable key (which in test mode starts with pk_test_) and the secret key (which starts with sk_test_). They can be found in the Stripe Dashboard under the API keys section.
Before you start any action, remember to switch to “Viewing test data”.
The orange colour of the UI confirms that you are sandboxed. This way you avoid real financial transactions while using dummy cards.
Okay, so we'll start our simulation with the most common flow for the eCommerce market, where we'll keep the user's payment details for future transactions. By the way, if you are interested in the eCommerce market, we recommend checking out our article regarding the worst aspects of online shopping.
One additional assumption: we do not use any official or external credit card helper to collect the data. We will simulate data coming from a custom form field.
But before you start writing your first line of code remember to check the logic of the whole process - always.
Once you have familiarized yourself with the above flow, you can go to the appropriate reference in the Stripe documentation. Now let’s get down to business.
We have already gone through Libraries and Documentation - it's time to create a project:
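The exact commands aren't spelled out here, but a minimal setup could look like this (assuming npm and a fresh directory; the directory name is just an example):

```
mkdir stripe-demo && cd stripe-demo
npm init -y
```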
Once we do this we should run our favorite editor and take a deep dive into the project.
Create a file called index.js and integrate the Stripe API. To pull in the library, you just have to run the following command in the directory of our project:
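```
# install the official Stripe library from NPM
npm install stripe
```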
Let's create a boilerplate in index.js so we can use the async/await approach:
```js
const stripe = require('stripe')('REPLACE_WITH_YOUR_SECRET_KEY');

// helpers
const createUser = async () => {};
const addCreditCard = async (user, card) => {};
const processPayment = async (user, card) => {};

(async () => {
  try {
    console.log('Stripe demo');

    // TODO: create user
    const user = await createUser();

    // TODO: add credit card to user
    const creditCard = await addCreditCard(user, {
      number: '4242424242424242',
      exp_month: 9,
      exp_year: 2021,
      cvc: '314',
    });

    // TODO: create payment intent for user
    await processPayment(user, creditCard);
  } catch (e) {
    console.error(e);
  }
})();
```
I added a few comments in the code and intentionally left some console.logs to speed up the debugging, but the point is to mimic the flow that we defined above: create a user, add a credit card and finally process the payment.
In case you wonder: 4242 4242 4242 4242 is a dummy credit card that is supported by Stripe.
Remember to replace the secret key placeholder with the one from your account. Otherwise, it will not work and you may be disappointed.
Ok, at this point we should focus on creating a customer by adapting the code as shown below.
```js
const createUser = async () => {
  try {
    const customer = await stripe.customers.create({
      email: 'jdoe@example.com',
    });
    console.log(customer);
    return customer;
  } catch (error) {
    console.error(error);
  }
};
```
As you can see, we are using the customers.create API. We are going very lean in this case, using just the email, but we can provide additional data for the user if needed.
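For instance, a minimal sketch with a couple of the optional fields the same endpoint accepts (the values here are made up for illustration):

```js
// The same customers.create call accepts optional fields such as name and metadata.
const customer = await stripe.customers.create({
  email: 'jdoe@example.com',
  name: 'John Doe', // optional
  metadata: { internalId: 'user_42' }, // optional, free-form key-value data
});
```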
The next step is to add the credit card. Below you can see what the snippet looks like.
```js
const addCreditCard = async (user, card) => {
  try {
    const paymentMethod = await stripe.paymentMethods.create({
      type: 'card',
      card,
    });
    console.log(paymentMethod);

    const attached = await stripe.paymentMethods.attach(paymentMethod.id, {
      customer: user.id,
    });
    console.log(attached);

    return paymentMethod;
  } catch (error) {
    console.error(error);
  }
};
```
Now, here we have to combine two APIs under the same roof: paymentMethods.create and paymentMethods.attach.
We need to create a payment method with our credit card data (code, expiry, CVV, etc.) and associate it with the user we have created a moment ago.
Our last step is to put everything we’ve done so far together and process the payment:
```js
const processPayment = async (user, card) => {
  try {
    const paymentIntent = await stripe.paymentIntents.create({
      amount: 1250,
      customer: user.id,
      currency: 'usd',
      payment_method: card.id,
    });
    console.log(paymentIntent);
  } catch (error) {
    console.error(error);
  }
};
```
Our wingman here is the paymentIntents.create function. It requires just a few parameters passed as an object: the amount (in the smallest currency unit), the customer id, the currency, and the payment_method id.
It should be pretty straightforward from now on. You might, however, want to be wary of one potentially tricky element: the amount.
Remember that the Stripe API expects the amount as an integer in the smallest currency unit (cents for USD), so you will need to add a little helper to convert it, according to this logic:
$10.00 → 1000
$10.50 → 1050
$0.50 → 50
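A minimal helper for that conversion might look like this (a sketch assuming prices are handled as floating-point dollars; in production you would rather keep amounts in integer cents end to end):

```js
// Convert a dollar amount to integer cents, as the Stripe API expects.
// Math.round guards against floating-point artifacts
// (e.g. 19.99 * 100 === 1998.9999999999998 in JavaScript).
const toCents = (dollars) => Math.round(dollars * 100);

console.log(toCents(10.0)); // 1000
console.log(toCents(10.5)); // 1050
console.log(toCents(0.5));  // 50
```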
OK, so now it’s time to run the code.
Let’s use:
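```
# run the script (assuming the file is named index.js, as above)
node index.js
```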
And a bit of JSON will start kicking into our terminal.
You can also open the Stripe Dashboard (remember to set the test mode!) and double-check the payment and the user we just created.
And we're done. It's time for you to start counting the incoming influx of cash from your eCommerce. Happy hunting!
If you want to see the whole code and play a bit with it - feel free! You can find it here.
P.S. Once you’re a millionaire, thanks to this, don’t forget to say thanks by letting us do some development work for you, will you?
In times of high demand for developers in the startup and enterprise worlds, coding skill is worth its weight in gold. The lack of access to this core competence becomes an obstacle for some companies, especially in the post-COVID-19 new normal that forces businesses both online and offline to reinvent themselves or at least pivot. At the same time, the inability to code turns out to be a business opportunity for others. It’s no surprise to see a rising number of no-code applications that aim at supporting developers or even replacing them. But why use these solutions only to create end-to-end products, and not to take the prototyping phase to a higher level? In this article, we will answer the question of whether and how a designer can make use of no-code applications in the design process.
Low-code and no-code platforms appeared as a response to a lack of skilled developers on the market and an increasing need to solve problems quickly during the development process.
They are derived from Rapid Application Development (RAD) tools such as well-known Excel, Microsoft Visual Studio, or Microsoft Access, which give a non-IT-professional user a range of features that touch on coding.
While the mentioned examples required at least some technical knowledge, low/no-code platforms go a step further. Although the line between low- and no-code is still vague, and in low-code platforms a user needs to understand some level of coding, the main goal of this solution is clear: limit the development activity (or developers’ involvement) and speed up the process by using a graphical user interface (GUI).
Potential users of such platforms, like developers, UX/UI designers, or graphic designers, can skip writing lines of code and use visual tools in the form of drag-and-drop components to quickly build a website or application. As a result, to create a real digital product, one doesn’t even need to know how to code (hence the no-code term). However, no-code has its obvious limitations, which raise questions about how effective these kinds of tools are, where their limits lie, and, finally, when one simply can’t go about designing without the help of an experienced developer.
There are currently a bunch of no-code tools at designers’ disposal that enable them to painlessly build digital products that are not just prototypes but have actual code so that they can be immediately implemented. One can find an enormous list of platforms that compete with one another in offering more and more advanced solutions, from creating a newsletter (Mailchimp), a website (Webflow, Wix) to building an online shop (Shopify) or even complex web apps (Bubble).
The ease of accessing these platforms can vary. Some of them require additional applications and/or direct contact with the company about a potential collaboration. Nevertheless, many of them don’t require a start-up budget or big investment and can be used for free by a freelancer or a small business.
So, why would a designer want to use no-code platforms? There are a few things that pose a challenge during the design process and user testing stage.
No-code platforms, as has been hopefully proven above, could easily solve big challenges of the product design process. But one shouldn’t get their hopes up too high. These tools are still in their infancy, and it will take much more time to even discuss replacing developers, if that’s ever going to be possible in the first place. But that doesn’t mean these tools can’t be useful to designers right now, making the process faster and more efficient or bringing new prototyping possibilities to an organization.
To analyze the usefulness of no-code platforms from a designer’s point of view, we took a closer look at the following tools:
Bubble positions itself as a place where one can prototype, launch, iterate, and scale their web application. As a designer, I was interested in the “prototype” option, which they describe as “Demonstrate your idea before making an investment in technical resources”.
After a quick sign-up, the user is taken to onboarding, which explains the app-building process step by step. This is indeed very helpful, as the interface is complex and doesn’t seem intuitive at first. The onboarding, on the other hand, is simple, and even though it’s a bit long, it’s well justified, as it helps to really understand what is going on (and why). The whole onboarding consists of 12 lessons that are skippable. Some of them also include a “hard mode” at the completion of a given lesson. With each lesson, the user is taken deeper and deeper into the complexity and possibilities of the product.
When users decide they are ready to build their own web app, they can choose from many types of digital products, such as marketplace, online store, CRM, and many more. This acts as a template for the design of the app in Bubble and is helpful if one doesn’t want to start from scratch.
When building an app, the user is also guided through the interface and all the possible actions.
There’s a ton of elements to choose from, such as text, buttons, video, map, form fields (e.g. file upload, sliders). Users can style every element separately or apply a predefined theme, which immediately makes the app look much better. They can also add simple or more complex interactions - for example when a user searches for an address, it’s displayed on the map and saved in the database.
After testing the app for a while I can confidently say that it can act as a great and efficient prototyping tool. As one can choose from many interface elements and apply interactions almost automatically, it can greatly improve the prototyping process. I wouldn’t recommend creating a UI Design in Bubble, though, as the styling options are a bit limited. The process is also different from how most UI designers work, where they build their own design system, style single elements, and group them into bigger components.
To summarize: as long as one is not using a predefined design system and wants to quickly prototype an app with complex interactions, they can definitely use this tool. The problem arises when one needs to then translate the prototype into a visual design, which means starting from scratch in the design tool, as every element lives in Bubble instead of e.g. Figma or Sketch. This does not resolve our challenge about the inefficiency of the process, but it does resolve the challenge of limited interactions. It’s also faster to create a prototype from the provided elements rather than having to draw them yourself.
I’ve also tested a few other apps to compare the experience. I tried Honeycode from Amazon, but the app has limited options and is unintuitive, which makes it hard to use for someone without technical knowledge of the AWS platform. I would even go as far as to claim that for designers it’s rather useless.
Webflow, on the other hand, offered a great user experience, with simple and helpful onboarding. It definitely makes building websites much faster and more fun, and I dare say that one can ditch a visual designer altogether if the desired website is simple and doesn’t have to look very polished. I would definitely use it for some quick usability testing of a website, and the work would be much faster than building components from scratch in Figma.
I also tested Appian, but I got discouraged by the platform’s unclear logic and complex interface, which makes it difficult to understand the tool and use it effectively. AppGyver seems to have some potential for the future but is more of a work in progress at this point.
Even though all the platforms I’ve tried are so-called no- or low-code tools, some of them require zero technical knowledge and some require more. Even if an app seems easy to use, the onboarding really helps to understand all the possibilities of the tool, which might be missed during free exploration. What’s significant from the designer’s perspective is that none of these platforms is a one-stop tool in which you can build everything, from mobile and web apps to websites. But if one wants to quickly prototype and validate their ideas, I’d say that those tools are definitely worth trying. It will be much faster and easier to use no-code platforms than to build everything from scratch in a prototyping tool, since on those platforms one can use predefined, working elements (e.g. sliders, file uploaders, maps) that otherwise take time to draw and to wire up with interactions.
Pros:
Cons:
So, will I use it in the future? Maybe. If so, I will definitely stick to Bubble and Webflow, as they were the most intuitive and helpful in the design and testing process.
HQ Trivia needs no introduction. It has had its ups and downs, but one cannot deny the fact that, when released, it was a completely new type of experience on our mobile devices, and it took the market by storm, notching 10 million downloads on Android and iOS devices.
HQ Trivia has been all about questions and answers. And the question you may be asking yourself right now is: what does it all look like under the hood?
There are a lot of moving parts inside the app’s engine, but one that plays the most crucial role is the producer panel that controls all sorts of stuff happening during the game.
It all starts with the creation of a new game, selecting what type of game it should be, and scheduling when it should happen.
The next step is, of course, adding a set of questions for which you can select a category, add possible answers, and then select which one of them is the correct answer. But that’s just the beginning.
Some questions, as you may know, have videos, photos, or audio files attached to them, and this is the place where the producer can add all of those things to the questions.
Then there is also the matter of not all questions being created equal from a reward standpoint.
After some questions, there are checkpoints that allow players to finish the game and get some bonuses or continue if they’re feeling lucky. This is also something that will be set during the process of creating questions.
The producer can also specify how many coins a specific checkpoint offers to the players. They basically design the whole in-game reward/gamification system while queuing up questions for the day ahead.
Another feature that is a part of the process of game creation is adding gift drops. They won’t be appearing in every game, but have to be prepared and attached to specific questions in advance.
As you can see, there is a lot happening before the game even begins. But what happens when questions start to pop up on your screen? How do they even get there? Are they sent to devices in advance? Or maybe there’s a way to take a look at them before they are asked?
It’s not that simple. It turns out the only safe haven for trivia questions to queue before they appear on players’ screens is the backend.
HQ knows this very well, and that’s why they send each question only when the time is right - that is, when the question is being asked. This is done through socket connections, which keep each player connected to the server for the entire duration of the game.
Let’s have a quick look at what sockets are and how they differ from REST APIs, which are used by most apps on our phones.
REST APIs are interfaces that allow us to get specific data from the server. It may be weather, recipes, or photos on Instagram, but the way the communication between devices works is the same: the app asks the server for some data, the server then answers by sending the requested data and the connection is closed.
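For illustration, here’s what that one-shot request/response cycle can look like in Kotlin using the OkHttp client (the weather endpoint below is hypothetical):

import okhttp3.OkHttpClient
import okhttp3.Request

fun main() {
    val client = OkHttpClient()
    // Ask the server for some data...
    val request = Request.Builder()
        .url("https://api.example.com/weather?city=Warsaw") // hypothetical endpoint
        .build()
    // ...receive the answer, and the connection is closed.
    client.newCall(request).execute().use { response ->
        println(response.body?.string())
    }
}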
You can read more about REST API in our article on integrating Google Fit with an Android app.
Sockets on the other hand keep the connection between the app and the server alive for as long as it’s needed, and this allows messages to go back and forth with no need to re-establish the connection.
Those messages, like in HQ Trivia’s example, may contain different kinds of data. For instance, when the presenter is ready to ask the new question, the producer simply clicks the “Start question” button which causes the server to send a message containing said question to all of the players, who then see the question appearing on their screens. Such a message doesn't contain the answer to the question, in case you were wondering, which makes this type of connection the safest for HQ Trivia's purpose.
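For contrast, here’s a minimal Kotlin sketch of the socket counterpart, again with OkHttp and a made-up URL - one long-lived connection over which the server pushes each question at the moment it is asked:

import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.WebSocket
import okhttp3.WebSocketListener

fun main() {
    val client = OkHttpClient()
    val request = Request.Builder()
        .url("wss://game.example.com/live") // hypothetical game server
        .build()
    // The connection stays open for the whole game; messages flow both ways
    // without re-establishing it.
    client.newWebSocket(request, object : WebSocketListener() {
        override fun onMessage(webSocket: WebSocket, text: String) {
            // e.g. the next question, pushed only when the producer triggers it
            println("Server pushed: $text")
        }
    })
}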
This is how it works in practice:
1. 10 seconds after a question pops up on the screen, all the players are given the answer.
2. The results are sent in the exact same way as the questions, using a socket connection.
3. The producer selects the option to send the question summary which triggers the server to send each participating player a message with data such as:
4. In the event of a player being wrong, the message will also contain info on whether the player is saved by an extra life or a free pass, or whether they were eliminated.
I hope you enjoyed this behind the scenes of how HQ Trivia creates its quizzes. And hopefully, learning about it will help you come up with your idea for the next big thing.
If that’s the case, then we - at intent - will be happy to help you bring the product to life, using our extensive knowledge and experience. Don’t hesitate to contact us.
Data is king and possessing it gives you power. But there’s no way that you can use it to rule your industry if you don’t turn this data into actionable insights. In order to make sense of the immense amount of digital information that your business generates these days, you need powerful business intelligence software. And one of the best options available, no matter what kind of data you’re dealing with, is Power BI. So let’s take a closer look at what this option has in store for you.
Power BI is Microsoft’s flagship business intelligence product – a versatile platform with a number of tools for processing massive amounts of data, so you’re not only able to collect and aggregate all of your data, but you can also visualize it graphically, analyze the outcomes, and share your insights with others. The platform consists of both local and cloud-based apps and services, and connects to a wide variety of Microsoft and non-Microsoft data sources and business tools.
And because Power BI has such an intuitive interface and easy-to-use features that ensure a pretty low barrier to entry, even non-technical business users can use it. Now, let’s keep going and see how you can benefit from this solution.
Plus, Power BI is pretty cost-effective, with two different pricing plans to choose from – Pro and Premium. You can even try the first one for free and then upgrade if you feel like you need the more advanced edition.
There are a number of ways that businesses can leverage Power BI, no matter the size of the company. For example, it can be used to:
Of course, since Power BI can be integrated with many different services and apps, and has a great variety of internal features, the sky’s the limit in terms of its applications. And speaking of features… there are at least 10 of them that deserve special attention.
All of these features – and many more options that are available within the platform – help you create useful and actionable visualizations. These dashboards can be used for different purposes while allowing you to maintain full control over your organization, department, or project. The examples listed below speak for themselves.
Power BI is extremely versatile and can be used in a wide variety of situations. And while it can have a steep learning curve (after a pretty easy start), it also allows you to create reports and dashboards that are elegant, clear, and easy to understand from the perspective of a decision-maker. And this is absolutely priceless, since the ability to make adequate and timely decisions based on undeniable facts can quickly turn into real profits for a company and its customers, or for society, depending on the kind of data you analyze and for whom.
Cover image by Stephen Dawson
With the influx of health-conscious customers, ensuring that your fitness product delivers a holistic, integrated service is growing ever more important.
Every creator of a health-oriented application is practically obliged to connect it to a fitness service to facilitate the seamless transfer of data between various (often competing) applications. In the Android world, that solution is the Google Fit API.
As is standard for Google’s server APIs, there are two ways to communicate with the server:
These solutions do not share a code base, so for example, slightly different fitness activity (as in “rowing” or “kayaking”, not MainActivity) types are available:
Google Fit activity types list
Google APIs for Android documentation list
(Note the lack of “Guided breathing” activity type for SDK.)
The choice between the two mainly depends on the level of technical sophistication of the SDK - if it’s advanced enough, well maintained, and documented, it is almost always more convenient to use than raw REST endpoints.
However, some optimization techniques, especially the use of cached data, might mean that the data provided by the SDK is less reliable than data acquired directly. At least until recently, this was the case with the Fit APIs (plural) and their Local Storage (described below), so that is something worth testing before deciding on an approach.
In order to improve performance and reduce server load, most API calls do not connect to the Google Fit server, using the local store for reading and writing data instead. Unless explicitly demanded, data propagation happens only every few hours. This can lead to confusion if, for example, you expect changed data to be immediately available on another device. On the other hand, this does make the data accessible to any local Fit-integrated applications even without a network.
This is particularly important to keep in mind when testing the application since with data sync not happening immediately, false negatives can be reported.
On the other hand, all local changes will be available immediately, even if the device doesn’t have a network connection at the moment. This is a great solution performance-wise, but quite confusing (and currently under-documented) during testing. Here, the Playground is of great help:
One feature of Google APIs I personally found particularly useful was the possibility of seamless online testing of its REST APIs. Not just for Fit - the list is astonishingly long and only keeps growing. With real data underneath, the Playground allows for quick verification of the stored data, before writing a single line of code.
As is often the case, an application might need to separate the Fit activities created by it from the remaining activities. There are two approaches to achieve that:
The first one is straightforward - simply call setAppPackageName when creating an activity, and verify that value when checking the downloaded activity’s type. The obvious downside is that this value will be identical for all activities an app creates, so if a more sophisticated separation is required, the package name alone is not enough. That’s when the naming pattern comes in - set the session identifier according to the chosen pattern and then verify, like below:
private fun activityIsCustomActivity(session: Session): Boolean {
    return session.activity == FitnessActivities.KITESURFING &&
        CUSTOM_SESSION_KEY_REGEX.matches(session.identifier)
}
In times when users’ privacy is an ever-growing concern, it is increasingly important to guarantee that an application will know only as much (or as little) about the user as it requires to function. It is entirely feasible for a user to wish to grant an app access to their basic biological data, but withhold more sensitive data such as heart rate. In response, Fit allows very detailed compartmentalization of the accessed data.
fun syncOptions(): GoogleSignInOptionsExtension = FitnessOptions.builder()
    .addDataType(DataType.TYPE_ACTIVITY_SEGMENT, FitnessOptions.ACCESS_READ)
    .addDataType(DataType.TYPE_ACTIVITY_SEGMENT, FitnessOptions.ACCESS_WRITE)
    .addDataType(DataType.TYPE_HEART_RATE_BPM, FitnessOptions.ACCESS_READ)
    .addDataType(DataType.TYPE_HEART_RATE_BPM, FitnessOptions.ACCESS_WRITE)
    .addDataType(DataType.TYPE_HEIGHT, FitnessOptions.ACCESS_READ)
    .addDataType(DataType.TYPE_HEIGHT, FitnessOptions.ACCESS_WRITE)
    .addDataType(DataType.TYPE_WEIGHT, FitnessOptions.ACCESS_READ)
    .addDataType(DataType.TYPE_WEIGHT, FitnessOptions.ACCESS_WRITE)
    .build()
For a simple example, let’s read the user’s height data from the server. First, we need a working Google Account object:
val signInAccount = GoogleSignIn.getLastSignedInAccount(context)
If the signInAccount value is not null, the app is authenticated, but not yet authorized. We need to request permission to read/write the required data types.
fun syncOptions(): GoogleSignInOptionsExtension = FitnessOptions.builder()
    .addDataType(DataType.TYPE_HEIGHT, FitnessOptions.ACCESS_READ)
    .build()

val requestCode = 123
GoogleSignIn.requestPermissions(activity, requestCode, googleSignInAccount, syncOptions())
The SDK will display a permissions dialog to the user. The result of the user’s action will be returned in a standard onActivityResult callback. To use the SDK later, we only need to check that the permissions are still granted (as in, they have not been revoked):
val hasSyncPermission = GoogleSignIn.hasPermissions(signInAccount, syncOptions())
If permitted, it is finally time to read the data from the server. We only need the latest height entry, so the limit will be set to 1. We want to make sure that we get data from the server and not just from the local storage, so we set the “enableServerQueries” flag. And of course we query the full time range, to include even the oldest data.
fun readPersonalProperty(
    signInAccount: GoogleSignInAccount,
    dataType: DataType
): Task<DataReadResponse> =
    Fitness.getHistoryClient(context, signInAccount)
        .readData(
            DataReadRequest.Builder()
                .read(dataType)
                .setTimeRange(1L, Instant.now().epochSecond, TimeUnit.SECONDS)
                .setLimit(1)
                .enableServerQueries()
                .build()
        )

readPersonalProperty(signInAccount, DataType.TYPE_HEIGHT)
Of all the APIs that compose the full fitness solution, Sensors Api is of particular importance. For all health applications that do not have a backing hardware device (like a bracelet or ring), the smartphone will be the source of all new data.
In general, sensors on Android devices belong to three categories:
The availability of sensor data depends on the presence of sensors of different types inside the device, hence the need to verify that in your application, before querying for the data. Therefore, a fallback mechanism needs to be implemented to provide a satisfying user experience even if most sensors are absent or malfunctioning.
Most of these sensors can have varying accuracy, of which the application will be notified by the onAccuracyChanged callback - particularly important for uses that require fine-grained data.
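As a minimal sketch of such a presence check (plain Android SensorManager API; the step-counter fallback scenario is just an example):

import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorManager

// Returns true if the device has a hardware step counter; if not, the app
// should fall back to e.g. deriving steps from other signals or server data.
fun hasStepCounter(context: Context): Boolean {
    val sensorManager = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    return sensorManager.getDefaultSensor(Sensor.TYPE_STEP_COUNTER) != null
}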
Photo by Ketut Subiyanto from Pexels
“All the world's a stage, And all the men and women merely players; They have their exits and their entrances, And one man in his time plays many parts” - William Shakespeare (and Civilization IV)

A successful project requires many people, filling different roles. Let’s assume that the design is ready, you know what to do, but you want to do it better. I believe that with the help of role-playing and some creative gamification you and your fellow inmates actually can do a better job. Roles such as:
“Ok, but my kid is just 3 years old! He can’t read or write yet, has an attention span of about 10 seconds on any given task, and prefers to just monkey around. What then?” - you say.

Well, funny you should ask, since he seems to be the perfect person for the role of:
However, more often than not, it just isn’t that simple and you’ll see why, below.

ROI = net profit of investment / cost of investment * 100%
(net profit = gain of investment - cost of investment)
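For a quick worked example (the numbers are made up purely for illustration): if an investment costs $10,000 and brings in $12,000, the net profit is $12,000 - $10,000 = $2,000, so ROI = 2,000 / 10,000 * 100% = 20%.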
Arduino has been around for years, and you've probably heard the term. It is an open-source electronics platform that allows developers to connect hardware with software easily. The popularity of Arduino is vast because it offers a simple and accessible user experience. As an iOS developer, I wanted to play around with this platform, so I've built a toy car that can be controlled with a mobile app. Read this article to find out how I did it.
There are two key components in this project:
You can see the final result in the picture below. On the left side is an iOS application, on the right side, a toy car.
Let’s start with the car controller (iOS application). In this article, I won't go into details about iOS application implementation. I will describe briefly how it works from the user perspective.
The application connects to Arduino using BLE (Bluetooth Low Energy).
After launching it, the device discovery screen is displayed. The application starts searching for Bluetooth devices automatically and displays devices with a matching service UUID. The user can restart device discovery by pressing the Search button (in case of network errors). By checking the Auto-connect checkbox, the application will automatically connect to the last selected device.
The controller screen has four controls:
It shows connection status in the top right corner. By pressing the back button, the application disconnects from the car and navigates back to the device discovery screen.
The toy car was built using mechanical parts from an old toy: chassis with wheels and 2 DC motors (one for driving, the other one for turning).
The custom components are Arduino, DC motor controller, Bluetooth, LEDs.
1. Arduino Due
Arduino is an open-source electronics platform based on easy-to-use hardware and software. It can be used for prototyping hardware devices. Implementation is very fast and in a few days, we can have a working device for real-world testing.
It is responsible for controlling hardware components (motors, sensors, LEDs), Bluetooth communication, and handling iOS application commands.
Arduino Due main characteristics:
2. Waveshare Motor Control Shield L293D
It is capable of driving 4 DC motors or 2 stepper motors at one time.
When using an external 9V power supply, it allows you to adjust the speed and direction of the motors, with current consumption up to 600mA (1.2A peak) and a voltage between 1.25V and 6.45V.
The toy car has two DC motors: the first one is used for driving, the second one for turning.
3. Bluetooth module HM-10
HM-10 is a Bluetooth 4.0 module. It works with voltage from 3.3V to 5V, it communicates over a serial UART interface (RX, TX pins). The maximum transmitter power is +6 dBm, the receiver sensitivity is -23 dBm.
4. Two DC motors
DC motor for driving (operation Voltage of 3-6V, free-run current of 200mA).
Mini DC motor for turning (operation Voltage of 3-6V, free-run current of 30mA).
Choose your DC motors depending on the weight of the car.
5. LED lights
4 x red LEDs for turn signals
4 x white LEDs for headlights
6. Other components
The photo below shows how components are connected. It doesn’t look pretty, but I assure you that it works :)
And here is a schematic diagram:
Let’s examine each and every component individually.
One of the biggest advantages of Arduino is that there is a vast set of ready to use components and libraries available. They are very easy to use and don’t require us to write much code. They are named Shields. In this project, I used a Bluetooth Module and a DC Motor Control Shield.
Let’s start with the Bluetooth module first. It communicates with the Arduino board using the Serial port.
Connect Bluetooth module HM-10 pins to Arduino board:
Define a helper macro named HC06 that points to Serial3. The configureBle() method starts the Bluetooth connection.
#define HC06 Serial3

void configureBle() {
    HC06.setTimeout(100);
    HC06.begin(9600);
}
Define Bluetooth commands for controlling the toy car. The iOS application sends commands to the Arduino. Currently, the application supports the following commands: drive, turn on headlights, change gear, turn.
The command length is 3 bytes.
The first byte is a command code, defined as the enum BleApiCommand.
The second byte is an additional parameter, depending on the command code:
The third byte is the command terminator, 0x0f.
const int commandTerminator = 0x0f;
const int commandLength = 3;

enum BleApiCommand {
    cmdDrive = 0x23,
    cmdHeadlights = 0x24,
    cmdGear = 0x25,
    cmdTurn = 0x40
};
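Just to make the framing concrete, here is a minimal sketch of assembling such a 3-byte frame (written in Kotlin purely for illustration - in this project the actual sender is the iOS app):

// Builds a 3-byte command frame: [command code, parameter, terminator].
fun buildCommand(code: Int, param: Int): ByteArray =
    byteArrayOf(code.toByte(), param.toByte(), 0x0f)

// e.g. full speed forward: buildCommand(0x23, 255) -> [0x23, 0xFF, 0x0F]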
Main application setup method.
void setup() {
    configureLightsPins();
    configureMotorPins();
    configureBle();
}
Main application loop responsibilities:
void loop() {
    int tickTime = millis();
    // Command loop
    while (HC06.available()) {
        byte buffer[commandLength];
        int size = HC06.readBytes(buffer, commandLength);
        if (size != commandLength) {
            continue;
        }
        lastCommandTimestamp = tickTime;
        handleCommand(buffer);
    }
    handleTurn();
    turnSignalControllerTick(tickTime);
    stopIfDisconnected(tickTime);
}
Handling commands
Extract command code and parameter, then handle each command in a separate method.
void handleCommand(byte data[]) {
    byte code = data[0];
    byte param = data[1];
    byte terminator = data[2];
    if (terminator != commandTerminator) {
        return;
    }
    switch (code) {
        case cmdHeadlights: {
            bool on = param == 0x02;
            setHeadlights(on);
            break;
        }
        case cmdGear: {
            setGear(param);
            break;
        }
        case cmdTurn: {
            turnWheels(param);
            break;
        }
        case cmdDrive: {
            drive(param);
            break;
        }
    }
}
Headlights (white LEDs) can be turned on and off manually by the user using a button in the top-right corner of an iOS application.
Connect each LED to a separate digital PIN through a resistor (220Ω) as shown in the diagram above.
Declare the 4 headlight and 4 turn signal pins as constants and set their mode to OUTPUT.
const int pinHeadlightsRight1 = 22; // Remaining headlights pin numbers: 24, 26, 28
const int pinTurnSignalLeftFront = 31; // Remaining turn signals pin numbers: 33, 35, 37

void configureLightsPins() {
    pinMode(pinHeadlightsRight1, OUTPUT);
    pinMode(pinTurnSignalLeftFront, OUTPUT);
    ...
}

void setHeadlights(bool on) {
    int value = on ? HIGH : LOW;
    digitalWrite(pinHeadlightsRight1, value);
    digitalWrite(pinHeadlightsRight2, value);
    digitalWrite(pinHeadlightsLeft1, value);
    digitalWrite(pinHeadlightsLeft2, value);
}
Turn signals (red LEDs) work automatically - they blink every 0.5 seconds when the car is turning. They work in four modes: disabled, blink left, blink right, blink both sides.
Define mode as an enum and tick interval as constant.
Use turnSignalControllerSetMode() method to change the blinking mode.
enum Modes {
    idle = 0,
    blinkLeft = 1,
    blinkRight = 2,
    blinkBoth = 3
};

const int tickInterval = 500; // milliseconds
int lastTickTimestamp = 0;
int mode = idle;
bool ledOn = false;

void turnSignalControllerSetMode(int newMode) {
    if (newMode == mode) {
        return;
    }
    mode = (Modes)newMode;
    lastTickTimestamp = 0;
    updateState();
}
The turnSignalControllerTick() method is called from the main application loop. It compares lastTickTimestamp (the number of milliseconds that had passed since the Arduino board began running the current program at the moment of the last tick) with the current timestamp. The result is a timer with a 500ms interval.
void turnSignalControllerTick(int currentTimestamp) {
    if (mode == idle) {
        return;
    }
    if ((currentTimestamp - lastTickTimestamp) > tickInterval) {
        lastTickTimestamp = currentTimestamp;
        updateState();
    }
}
The updateState() method turns the LEDs on and off depending on the current mode.
void updateState() {
    if (mode == idle) {
        ledOn = false;
    } else {
        ledOn = !ledOn;
    }
    int value = ledOn ? HIGH : LOW;
    switch (mode) {
        case idle:
        case blinkBoth:
            digitalWrite(pinTurnSignalLeftFront, value);
            digitalWrite(pinTurnSignalLeftRear, value);
            digitalWrite(pinTurnSignalRightFront, value);
            digitalWrite(pinTurnSignalRightRear, value);
            break;
        case blinkLeft:
            digitalWrite(pinTurnSignalLeftFront, value);
            digitalWrite(pinTurnSignalLeftRear, value);
            digitalWrite(pinTurnSignalRightFront, LOW);
            digitalWrite(pinTurnSignalRightRear, LOW);
            break;
        case blinkRight:
            digitalWrite(pinTurnSignalLeftFront, LOW);
            digitalWrite(pinTurnSignalLeftRear, LOW);
            digitalWrite(pinTurnSignalRightFront, value);
            digitalWrite(pinTurnSignalRightRear, value);
            break;
    }
}
Connect DC motors to the Motor Control Shield L293D:
const int pinMotorDrive_dir1 = 8;
const int pinMotorDrive_dir2 = 7;
const int pinMotorDriveSpeed_pwm = 10;
const int pinMotorTurn_dir1 = 12;
const int pinMotorTurn_dir2 = 13;
const int pinMotorTurnSpeed_pwm = 11;

void configureMotorPins() {
    pinMode(pinMotorDrive_dir1, OUTPUT);
    pinMode(pinMotorDrive_dir2, OUTPUT);
    pinMode(pinMotorDriveSpeed_pwm, OUTPUT);
    pinMode(pinMotorTurn_dir1, OUTPUT);
    pinMode(pinMotorTurn_dir2, OUTPUT);
    pinMode(pinMotorTurnSpeed_pwm, OUTPUT);

    digitalWrite(pinMotorDrive_dir1, 1); // set forward direction
    digitalWrite(pinMotorDrive_dir2, 0); // set forward direction
    digitalWrite(pinMotorDriveSpeed_pwm, HIGH); // set to high to enable the L293 driver chip
    analogWrite(pinMotorDriveSpeed_pwm, 0); // set speed to 0

    digitalWrite(pinMotorTurn_dir1, 1); // set forward direction
    digitalWrite(pinMotorTurn_dir2, 0); // set forward direction
    digitalWrite(pinMotorTurnSpeed_pwm, HIGH); // set to high to enable the L293 driver chip
    analogWrite(pinMotorTurnSpeed_pwm, 0); // set speed to 0
}
The speed parameter (0-126 = drive backwards, 127 = stop, 128-255 = drive forward) must be converted into a speed (0-255) and a direction. Choose the minimum speed/voltage depending on the car's weight and the DC motor parameters.
void drive(byte value) {
    int minSpeedValue = 100;
    int maxSpeedValue = 255;
    if (value < 127) {
        // backwards
        digitalWrite(pinMotorDrive_dir1, 0);
        digitalWrite(pinMotorDrive_dir2, 1);
        int speedValue = minSpeedValue + (maxSpeedValue - minSpeedValue) * (127 - value) / 127;
        analogWrite(pinMotorDriveSpeed_pwm, speedValue);
    } else if (value > 127) {
        // forward
        digitalWrite(pinMotorDrive_dir1, 1);
        digitalWrite(pinMotorDrive_dir2, 0);
        int speedValue = minSpeedValue + (maxSpeedValue - minSpeedValue) * (value - 127) / 127;
        analogWrite(pinMotorDriveSpeed_pwm, speedValue);
    } else {
        // 127 - stop
        digitalWrite(pinMotorDrive_dir1, 1);
        digitalWrite(pinMotorDrive_dir2, 0);
        analogWrite(pinMotorDriveSpeed_pwm, 0);
    }
}
Connect potentiometer pins to the board:
const int pinPotentiometer = A0;

int readPotentiometer() {
    int sensorValue = analogRead(pinPotentiometer);
    return sensorValue;
}
Convert the target angle received from the iPhone (0-255) to the potentiometer’s scale (0-1023) in order to compare it with the potentiometer reading.
void turnWheels(byte value) {
    int minLeft = 300; // wheels turned all the way to the left
    int maxRight = 900; // wheels turned all the way to the right
    int center = 600; // wheels are straight
    if (value < 127) {
        // turn left
        turnSignalControllerSetMode(1);
        float factor = ((float)value) / 127.0;
        targetTurnValue = minLeft + factor * (center - minLeft);
    } else if (value == 127) {
        // straight
        turnSignalControllerSetMode(0);
        targetTurnValue = center;
    } else if (value > 127) {
        // turn right
        turnSignalControllerSetMode(2);
        float factor = ((float)value - 127.0) / 128.0;
        targetTurnValue = center + 1 + factor * (maxRight - center - 1);
    }
}
Method handleTurn() does a few things:
Choose the speed/voltage depending on the DC motor parameters.
void handleTurn() {
    int sensorValue = readPotentiometer();
    const int tolerance = 20;
    const int speed = 130;
    if (abs(sensorValue - targetTurnValue) < tolerance) {
        // value reached, stop motor
        analogWrite(pinMotorTurnSpeed_pwm, 0);
    } else if (sensorValue > targetTurnValue) {
        // turn left
        digitalWrite(pinMotorTurn_dir1, 0);
        digitalWrite(pinMotorTurn_dir2, 1);
        analogWrite(pinMotorTurnSpeed_pwm, speed);
    } else if (sensorValue < targetTurnValue) {
        // turn right
        digitalWrite(pinMotorTurn_dir1, 1);
        digitalWrite(pinMotorTurn_dir2, 0);
        analogWrite(pinMotorTurnSpeed_pwm, speed);
    }
}
That’s pretty much everything. I hope you enjoyed this article and that you will build your own toy car!
All the signs in the sky tell us that Machine Learning and Artificial Intelligence will be as important a revolution as web browsers or smartphones were back in the day. Unfortunately, this technology has a relatively steep learning curve, which makes its adoption much slower. Very few companies can afford a dedicated data scientist on the payroll. And even if they can, the quality or the amount of data often becomes the problem - there just isn’t enough of it to train models well.
Fortunately, there are companies on the market that provide very well-trained models and easy-to-integrate Machine Learning APIs for everyday use cases, which you can incorporate into your project and thus take advantage of cutting-edge technology without making a leap of faith. In this blog post, I’d like to point your attention to some of those use cases, which can be the low-hanging fruit you pick right up.
When thinking about AI and machine learning, the problem of recognizing patterns in an image usually comes to mind first. Facial recognition, pattern recognition, QR or barcode read-out, detecting inappropriate images before they get published, matching a face to a person, and many, many more use cases fall into this category. And since the problem is fairly common, there are many solutions out there to tackle it. Let’s first have a look at what device vendors offer us in their platform SDKs.
Apple’s CoreML and CreateML
It turns out both Apple and Google have been offering quite a set of APIs we can take advantage of to perform image recognition directly on a device. In the case of iOS devices, we get the CoreML framework with several pre-trained models (for example, for face detection or optical character recognition), along with very easy-to-use software for training your own models, called CreateML. One of the key changes in the third version of this framework is the ability to re-train your models on-device, so your apps can get smarter as users use them - without breaching their privacy, because no data leaves their handset or tablet.
Google’s MLKit
Of course, Google has a counterpart to CoreML called MLKit. Things get very interesting with Google’s offering, because MLKit is closely integrated with Firebase (Google’s Backend-as-a-Service platform), and therefore the framework is available for both iOS and Android. What’s more, it can work in two scenarios - on-device and powered by the cloud - both of which have their pros and cons. Integrating MLKit takes just a few lines of code, and it lets you tackle a number of image-recognition problems, for instance: detecting faces, reading barcodes, detecting & tracking objects, recognizing and labeling objects in an image, and more. Obviously, you’ll have to keep in mind that on-device inference will be much faster than sending heavy imagery over to Google’s infrastructure, but you’ll be limited to lighter, more constrained models. A good example may be OCR - on the device, you will only be able to detect alphanumeric characters.
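As a rough illustration of how little code that is, here’s a minimal Kotlin sketch of on-device face detection (using the current standalone ML Kit artifacts rather than the original Firebase-hosted ones, so treat the exact package names as an assumption):

import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.face.FaceDetection

// Detects faces in a bitmap entirely on-device; no image data leaves the phone.
fun detectFaces(bitmap: Bitmap) {
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)
    val detector = FaceDetection.getClient()
    detector.process(image)
        .addOnSuccessListener { faces -> println("Found ${faces.size} face(s)") }
        .addOnFailureListener { error -> println("Detection failed: $error") }
}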
Google’s Cloud Vision
If for whatever reason you don’t want to do the work on the client side, you’re not out of luck, as there are plenty of APIs you can take advantage of to move the heavy lifting to the cloud. One of my favorites is Google’s Cloud Vision. As with any cloud service, these APIs unfortunately don’t come free; however, you’ll usually get a few free credits to start and experiment with.
Cloud Vision will let you do as much as the on-device frameworks above, and more - after all, you’ll be using Google’s state-of-the-art infrastructure and models trained on loads of data. One of the best examples of the quality of the results, which I often show people, is the case of The New York Times digitizing their entire photo library with Google Cloud. What’s really interesting in this example is how they took the result of the OCR process and piped it into another API for NLP (natural language processing) to understand things written on the backs of the photos. Similar APIs are offered by Amazon under their Rekognition service, Microsoft under Azure Cloud, and IBM on their Watson platform.
Another very common ML application is Natural Language Processing and everything related to pulling insights from unstructured blocks of text. Overall, it’s an extremely tough nut to crack because of the multitude of languages, dialects, wording styles, etc. Thankfully, there are APIs that can help you tackle this problem; the bad news is that you’ll be limited pretty much exclusively to the cloud - the models needed to address it are way too heavy to run on a device.
Looking at providers, again we have the usual suspects: Amazon Comprehend, Google Natural Language, and IBM Watson Natural Language Understanding. They will all obviously give you very different results, so before you start integrating any of these services, make sure you thoroughly test each and every one of them. Ideally, you’ll want to structure your code so that it’s relatively easy to swap services underneath, because all of them evolve - you may start with one, but decide to change it later in the project lifecycle.
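One way to keep that flexibility is a thin abstraction over the provider. Here’s a minimal Kotlin sketch (all names below are hypothetical):

// A provider-agnostic interface: implementations can be swapped without touching callers.
interface SentimentAnalyzer {
    // Returns a score, e.g. -1.0 (negative) .. 1.0 (positive).
    fun analyzeSentiment(text: String): Double
}

class GoogleNlpAnalyzer : SentimentAnalyzer {
    override fun analyzeSentiment(text: String): Double =
        TODO("Call Google Natural Language here")
}

class ComprehendAnalyzer : SentimentAnalyzer {
    override fun analyzeSentiment(text: String): Double =
        TODO("Call Amazon Comprehend here")
}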
What kind of results can you expect after applying these models to your text? First of all, you’ll get a list of classified entities. You’ll also be provided with a detailed sentiment analysis of the entire text, a structural sentence breakdown, some high-level categories visible in the text, and much more. Following up on the example from the previous paragraph: once the folks from the NYT had recognized all the labels on the backs of the millions of photos they were digitizing, they fed the results into Google’s NLP service and were able to pick out important details, like where and when a photo was taken, what’s on it, and additional metadata from the back.
Drawing conclusions based on user behavior in the app is a perfect case for applying heavy ML. So far, in this scenario, we’re pretty much limited to only one vendor - Google Firebase - however, it’s well worth giving it a go.
By using several products from under the Firebase umbrella (RemoteConfig, A/B Testing, Notifications, and Predictions), you can target users who are (according to Google’s models) likely to churn and shower them with promotions via push notifications, convincing them to stick around. Or do just the opposite - you can segment out users who are likely to make a purchase.
Because the system learns as people use your digital product, you can create custom predictions based around your own events created in Firebase Analytics, and then pick out users who are likely to hit them. Finally, thanks to Firebase’s integration with BigQuery, you can export both the events from your Analytics as well as Predictions, and crunch them directly in BigQuery for further analysis.
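For context, the custom events that Predictions can target are logged with a couple of lines; a minimal Kotlin sketch (the event and parameter names are made up):

import android.os.Bundle
import com.google.firebase.analytics.FirebaseAnalytics

// Logs a hypothetical "level_completed" event that a custom prediction could target.
fun logLevelCompleted(analytics: FirebaseAnalytics, level: Int) {
    val params = Bundle().apply { putInt("level", level) }
    analytics.logEvent("level_completed", params)
}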
Best of all, Firebase Predictions are free of charge on all Firebase plans, so the barrier to entry is almost non-existent.
As you can see, there are plenty of platforms offering various APIs and models for a lot of different scenarios - and we’ve only covered a few relatively common cases. I highly encourage you to dip your toes into the services I mentioned; the time investment is minimal, and you can try out a number of APIs that can bring a lot of value to your product and your users. It’s also a great way to get started with machine learning and eventually level up into building your own models and APIs - from there onwards, the sky is the limit!
Cover Photo by Hunter Harritt
liveDataVar
    .distinctUntilChanged()
    .map { /* do the RX magic */ }
    .nonNull()
    .take(3)
RxBroadcastReceivers
    .fromIntentFilter(
        context,
        IntentFilter(LocationManager.MODE_CHANGED_ACTION))
    .subscribe {
        // react to broadcast
    }
disposable = url.download()
    .observeOn(AndroidSchedulers.mainThread())
    .subscribeBy(
        onNext = { progress ->
            // download progress
            button.text = "${progress.downloadSizeStr()}/${progress.totalSizeStr()}"
            button.setProgress(progress)
        },
        onComplete = {
            // download complete
            button.text = "Open"
        },
        onError = {
            // download failed
            button.text = "Retry"
        }
    )
American drivers spend an average of 293 hours a year behind the wheel. According to McKinsey & Company, in 2018 40 percent of them said they would change car brands for better connectivity. The drive to feeling connected increases every year, and strongly influences the world’s biggest industries.
At Intent, we’ve been looking at this automotive revolution for a while, and have actually worked on some car solutions, so, obviously, we have our favorite car apps for iOS. I decided to do a quick survey among friends, acquaintances, and colleagues inside and outside the company to find out which apps are rockin’ people’s phones. This resulted in a list of the 5 apps most of us use almost every day. Wanna know who won our hearts?
Waze is the world’s largest community-based traffic and navigation app. They proved that when it comes to perfect navigation, it’s exactly as their website says — “nothing can beat real people working together”. Waze is this perfect kind of collective consciousness that alerts you to everything that’s happening on the road — accidents, traffic jams, police patrols, or road hazards — in real time. Not to mention that the maps are constantly updated.
Another thing people love about Waze is that you can effortlessly sync with your friends via Facebook and, when driving to the same destination, coordinate everyone’s arrival times and check how they’re doing on the way.
Oh, almost forgot — with this app you can literally save your money by navigating to the cheapest gas stations on your route. How freakin’ cool is that?
And they have a carsharing app too!
Seriously, I love Waze.
Btw. Waze has 90 million monthly users in 185 countries.
Imagine you’re hitting the long road with someone who:
Yes, my friends. Such a driving partner exists. And it’s called Audible. And it’s great for short rides too!
Btw. The longest audiobook ever is “50 Lectures” by Takaaki Yoshimoto (Japan) and it lasts 6,943 minutes (115 hours and 43 minutes). Challenge accepted?
Driving with friends and family on board is sweet, but let’s be honest — sometimes you just want them to shut the f❤❤k up, and you have every right to feel that way. Instead of saying something that will make you look like an asshole, you say (using that spontaneous voice of yours): “Hey guys, I’ve just discovered an endless source of great podcasts, you need to hear some of them right now!” and put on a nice podcast via Overcast before they can answer. An easy and peaceful way! They’re not offended, and you get to listen to what YOU want — is that a win-win situation or are my glasses just blurry?
Also, people who don’t like driving alone and/or feel lonely in a car will get great use out of this app too — a lot of different people, topics, and opinions are just one touch away. Almost like Tinder, but without the creeps!
Btw. Please NEVER Tinder and drive!
I strongly believe there’s no proper car ride without a sing-along and, believe me, with other inFullSingers by my side, iHeartRadio is the best app to help you climb to the top of your vocal skills (read: scream your head off).
But seriously, iHeartRadio gives you unlimited streaming music, thousands of live radio stations, a lot of podcasts, and playlists crafted for any mood or activity. You can search by music genre, browse artist radio stations, and also create your own playlists, of course. There’s so much audio content here my head hurts!
Sometimes you’re just so overloaded with stuff to do that you forget where the hell your car is parked. Don’t worry — there’s a kind soul that will guide you through the mysterious, sometimes even multi-level mazes of parking areas and the spiderwebs of streets. Apple Maps automatically detects where you’ve left your car and helps to keep your stress level low.
1 in 7 American drivers has forgotten where they parked their vehicle, and collectively they have spent 200 years looking for them.
Interested in Android Auto apps too? Don’t miss this article! Also, make sure to take a look at our car apps statistics. If you feel inspired to build an automotive app of your own, don’t hesitate to request a free quote via the contact page or drop us an email.
Cover photo by Alessio Lin on Unsplash
Scrum. This mythical thing I’m sure you’ve at least heard about. There is also a pretty fair chance that you already know that Scrum is a framework for developing and delivering all sorts of products, and that it has events, roles, values, and artifacts. If you don’t, you definitely should. If you think you do, you probably know it less thoroughly than you think. Trust me, if you’re about to take the PSD certification, you’ll find out soon enough.
There are three main Scrum certificates — Professional Scrum Master, Professional Scrum Product Owner, and Professional Scrum Developer. Choosing the right one for yourself might be a little tricky. One might wonder why even bother taking any other exam when you can just go for the Scrum Master title? The bottom line is that PSD or PSPO is not worse than PSM. Each has its own area of expertise and each is equally valuable. The Developer’s exam focuses not only on Scrum theory but also requires strictly development-related knowledge, like Test-Driven Development and Continuous Integration. So, if you’re a developer and you’re not planning to be a Scrum Master anytime soon, the PSD exam is the recommended choice. It will give you the skills to work efficiently as a professional in a Scrum-led project.
First and probably the most important rule is: STICK TO THE SCRUM GUIDE. There is no way around it. The Guide is short enough to get to know it in just a few readings, but you will probably need more than that to truly remember every detail. I would also recommend reading it in your native language at least once to make sure everything is clear. Also, don’t get me wrong — there are tons of different sources out there. Most of them are free and good. Reading them is profitable as long as you can relate every piece of information you read back to the Scrum Guide.
PSD also requires knowledge that is specific to the developers’ world. That includes:
If you have any doubts about any of those subjects, I recommend filling in all the gaps before taking the exam.
Later, after the reading is done, you should proceed to practice. There are two online preparation tests I’ve been doing. The first is the official Open Assessment. It contains solely legitimate questions that will most likely appear in the exact same form on the exam. The second option is the Scrum Quiz provided by Mikhail Lapshin, which covers the most important aspects of Scrum theory. How do you know you’re ready? I had a simple rule: if you pass the official test with a 100% score five times in a row, then you most likely know enough to pass the exam. Take your time — mastering those quizzes will definitely pay off.
The idea of the exam is rather simple. First, you create an account at scrum.org. Then you need to buy a password key for the PSD certificate. The cost, as of August 2018, is $200. When you’re ready, you enter the password and the exam begins. There are 80 multiple-choice and true/false questions. The time-box is 60 minutes. You’ll need at least 85% to pass the exam, which means at least 68 correct answers. You get the result instantly after finishing the exam.
The good thing about the exam is that you can take it wherever and whenever you please. Thanks to that, you can (and definitely should) make certain preparations. First of all, have a Scrum Guide by your side at all times. Best if it’s printed, so you can quickly look up things you’re not sure of.
Secondly, you might be tempted to google things as nobody is watching you — it’s not an entirely bad idea, but you must know that you will almost never find exact answers to the exact questions, and looking for them will be a waste of your precious time.
The system gives you a great feature that lets you mark questions you would like to come back to later. Use it. Mark the questions you don’t know the answer to and leave them for the end. The exam gives you less than a minute per answer, so time is essential here.
What is most important in passing the PSD certification exam is being familiar with the type of questions you will be asked. There is a certain idea behind them, which in short sounds like: “Scrum is the best. It will cure all your problems and make your life a paradise.” Seriously, that’s the key. Also, there won’t be much time for contemplation, so make sure you buckle down to preparations — mostly the official Open Assessment, as you will tackle the same questions during the exam. Lastly, don’t worry! The exam is fairly simple, so you’ll be just fine! :)
430 * 10 = 4300 ohm = 4.3 kOhm!

That was easy, right? Of course, you are not the only one thinking that remembering all of these colors and values is a no-go. So a bunch of great people came up with the idea of resistor color code calculators. You can find one here. Also, check out the App Store and Google Play for some awesome apps doing the same thing - they may be even handier. SMD components, as well as reading their values, will be covered in another blog post of this series. Stay tuned!

So we’ve got potentiometers and fixed-value resistors. These are the most widely used types of components that regulate resistance.
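If you’d rather let code do the remembering, the band arithmetic is trivial. A minimal Kotlin sketch (the helper function is made up for illustration):

// Combines the digit bands into one number, then applies the multiplier band.
fun resistorOhms(digitBands: List<Int>, multiplier: Double): Double =
    digitBands.fold(0L) { acc, digit -> acc * 10 + digit } * multiplier

fun main() {
    // Yellow (4), orange (3), black (0) digit bands with a x10 multiplier:
    println(resistorOhms(listOf(4, 3, 0), 10.0)) // 4300.0 ohm = 4.3 kOhm
}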
contract NumberContract {
    address private contractIssuer;
    uint private number;

    modifier onlyContractIssuer {
        require(contractIssuer == msg.sender);
        _;
    }

    function NumberContract() public {
        contractIssuer = msg.sender;
    }

    function setNumber(uint newNumber) onlyContractIssuer public {
        number = newNumber;
    }
}
Events are inheritable members of contracts. When they are called, they cause the arguments to be stored in the transaction’s log — a special data structure in the blockchain. These logs are associated with the address of the contract and will be incorporated into the blockchain and stay there as long as a block is accessible. Log and event data is not accessible from within contracts (not even from the contract that created them).
contract NumberContract {
    address private contractIssuer;
    uint private number;

    event NumberSet(uint number);

    modifier onlyContractIssuer {
        require(contractIssuer == msg.sender);
        _;
    }

    function NumberContract() public {
        contractIssuer = msg.sender;
    }

    function setNumber(uint newNumber) onlyContractIssuer public {
        number = newNumber;
        NumberSet(newNumber);
    }
}
Solidity supports multiple inheritance by copying code, including polymorphism. All function calls are virtual, which means that the most derived function is called, except when the contract name is explicitly given. When a contract inherits from multiple contracts, only a single contract is created on the blockchain, and the code from all the base contracts is copied into the created contract. The general inheritance system is very similar to Python’s, especially concerning multiple inheritance.

Function visibility: external

While in Solidity almost all types of function visibility (private, public, and internal) are intuitive and similar to those in Java, one is different. A function can be declared as external, which means it can be called only from other contracts and by transactions. Calling it internally is impossible.

Function modifiers: pure, view, payable

When a function is declared as pure, it cannot modify or even access the state (variables, mappings, arrays, etc.). It’s the most restrictive modifier, but it is the most secure and saves the most gas when applied. View is a slightly more permissive modifier. It basically acts the same as pure but allows access to the state (though it still cannot modify it). When we want a function to be able to receive Ether together with a call, we declare it as payable. It allows for money transfers, deposits, and basically handling money in every way needed.
contract NumberContract {
    address private contractIssuer;
    uint private number;

    mapping(address => uint256) public availableWithdrawals;

    modifier onlyContractIssuer {
        require(contractIssuer == msg.sender);
        _;
    }

    modifier hasPositiveBalance(address user) {
        require(availableWithdrawals[user] > 0);
        _;
    }

    function NumberContract() public {
        contractIssuer = msg.sender;
    }

    function deposit() public payable {
        availableWithdrawals[msg.sender] = safeAdd(availableWithdrawals[msg.sender], msg.value);
    }

    function withdraw() public payable hasPositiveBalance(msg.sender) {
        uint256 amount = availableWithdrawals[msg.sender];
        availableWithdrawals[msg.sender] = 0;
        msg.sender.transfer(amount);
    }

    function getAmountToWithdraw() public view returns (uint256) {
        return availableWithdrawals[msg.sender];
    }

    function safeAdd(uint256 a, uint256 b) internal pure returns (uint256) {
        uint256 c = a + b;
        assert(c >= a);
        return c;
    }

    function setNumber(uint newNumber) onlyContractIssuer public {
        number = newNumber;
    }
}
for (var i = 0; i < a.length; i++) {
    a[i] = i;
}

will enter an infinite loop if the “a” array is longer than 255 elements: var infers the smallest type that fits the literal 0, which is uint8, so the iterator will wrap around back to 0 after 255. This happens despite the underlying VM using 256 bits to store this byte. You should know about this and declare “i” as uint instead of var.

Operator semantics
Operators have different semantics depending on whether the operands are literals or not. For example, 1/2 is 0.5, but x/y for x == 1 and y == 2 is 0. The precision of the operation is also determined in this manner — literals have arbitrary precision, other values are constrained by their types.

Mapping

Mappings, unlike maps in Java, don’t throw an exception on non-existing keys. They just return the default value of the value type (when the values are integers, 0 will be returned). What’s more, there is no way to check if an element exists (like contains() in Java) — when 0 is returned, we don’t know if the key was added to the mapping with value 0, or if it’s the default value being returned because there is no such key; a common workaround is to store a boolean “exists” flag alongside the value. There’s also no built-in method of extracting a key or value set from a mapping, which means iterating over the key set is not possible.
protocol LocationFetcherDelegate: class {
    func locationFetcher(_ fetcher: LocationFetcher, didUpdateLocation location: CLLocation)
}

final class LocationFetcher {
    weak var delegate: LocationFetcherDelegate?
    // Class implementation goes here
}

This solution has a few problems:
final class LocationFetcher {
    let location: Observable<CLLocation>
}

As we can see, problem #1 is gone. We have one immutable property, location, which emits location updates, and no component can modify it. Not only is our program more deterministic but, since we don’t need a specific reference to the delegate, we got rid of the additional protocol as well! Let’s see what subscribing to updates would look like:
let locationFetcher: LocationFetcher // Assume it exists
locationFetcher.location
    .observeOn(MainScheduler.instance)
    .subscribe(onNext: {
        print("new location: \($0)")
    })
    .disposed(by: disposeBag)

This solves problems #2 and #3. As we can see, RxSwift lets us specify on which thread we receive the updates, and multiple objects can subscribe to a single stream to get updates via a simple DSL-like API.
System listens to the presence of a person → Person near the door (with smartphone) → System detects presence → Person launches the application → Person unlocks the door via the app → System verifies permissions → System unlocks the door

Our solution:
System listens to the presence of a person → Person near the door (with iPhone) → System detects presence → Person presses the gate opening button on the door → System verifies the permissions → System unlocks the door

Please notice how comfortable this is. No wasting time on getting the phone out, launching the app, and interacting with it to open the door. It’s as simple as one button click. How safe is our solution? Please keep in mind that the level of security against unauthorized use does not have to be very high here — the purpose is to stop people who should not pass through the building’s gate, not to be a barrier against a person who already possesses an iPhone with granted access. That’s why our solution is suitable for managing access to a building, but it should not be used to control access to housing. Here, we focus on convenience of use with a reasonable level of security.

Prerequisites:
$ sudo apt-get update
$ sudo apt-get dist-upgrade
$ curl -sL https://deb.nodesource.com/setup_9.x | sudo -E bash -
$ sudo apt-get install -y nodejs

The above sequence guarantees the latest versions of the packages, and then we install the latest version of Node.js. To check whether the operation was successful and what Node.js version we have, run the command:
$ node --version

The next step is to install the Node.js libraries that are used in the application, and these are:
$ npm install uuid
$ npm install node-cache
$ npm install rpi-gpio
$ npm install rpi-gpio-buttons
$ npm install request
$ mkdir /home/pi/gateguard

Download the ZIP-ed source code from GitHub and unpack it, and then move the *.js files to the newly created folder:
$ wget -O gateguard-rpi.zip https://github.com/inFullMobile/gateguard-rpi/archive/master.zip
$ unzip gateguard-rpi.zip
$ cp -r gateguard-rpi-master/* /home/pi/gateguard/

We still have one more thing to configure, but we can’t do it until the app in the cloud is set up — it is about the generated URL address of our application, which will be included in the code on RPi, so that RPi can connect to it in the cloud. Raspberry Pi is almost ready… so it’s time to launch another element of our system — a service in the cloud, which will be responsible for coordinating communication between the iPhone and RPi.

Two more elements of the system require an Xcode environment (preferably the newest one, currently Xcode 9.3), and if you do not have it yet, please install it from the Apple AppStore (https://itunes.apple.com/pl/app/xcode/id497799835?mt=12), then launch it and complete the installation.

The cloud app will be deployed on the Heroku platform, where we can host our application for free for development and education purposes. As the first step, I suggest setting up an account on the Heroku platform, within which our application will be deployed and maintained. Go to heroku.com and create an account (if you do not have one already). Then we will need the Heroku Command Line Interface (aka Heroku CLI) installed on the computer. We’ll use it to deploy the application. To install Heroku CLI, follow the instructions: Heroku CLI Installation. As the application is written with the Vapor framework (Swift on board), please install the Vapor Tool Box, which will support the app: Vapor Tool Box Installation.

In the next step, download the application code from the repository. To do this efficiently, use the Git client:

$ git clone https://github.com/inFullMobile/gateguard-cloud.git

Let’s prepare our project for deployment with Heroku CLI commands:
$ cd gateguard-cloud
$ rm -rf .git # removes current git version control files
$ heroku login # once asked, provide your username (email) and password
#create new git local repository
$ git init
$ git add .
$ git commit -m "Initial commit"

We can now deploy our application to Heroku servers:
$ vapor heroku init

As a result, you will be asked some questions, and you may answer as I did:
Would you like to provide a custom Heroku app name?
y/n> n
Would you like to deploy to a region other than the US?
y/n> y
Region code (us/eu):
> eu
https://pacific-lowlands-45092.herokuapp.com/ | https://git.heroku.com/pacific-lowlands-45092.git
Would you like to provide a custom Heroku buildpack?
y/n> n
Setting buildpack...
Are you using a custom Executable name?
y/n> n
Setting procfile...
Committing procfile...
Would you like to push to Heroku now?
y/n> y
This may take a while...
Building on Heroku ... ~5-10 minutes [ • ]

and finally:
Building on Heroku ... ~5-10 minutes [Done]
Spinning up dynos [Done]
Visit https://dashboard.heroku.com/apps/
App is live on Heroku, visit
https://pacific-lowlands-45092.herokuapp.com/ | https://git.heroku.com/pacific-lowlands-45092.git

Remember the generated domain of your cloud app for later use (in my case, it is https://pacific-lowlands-45092.herokuapp.com). It is a good moment to return to the code on RPi and configure the correct URL address of our service in the cloud. To do this, reconnect to RPi via SSH and open the file /home/pi/gateguard/cloud_module.js for editing, then enter the correct address of the service on Heroku (code: line 35). In my case, it looks like this:
uri: 'https://pacific-lowlands-45092.herokuapp.com/register-token'

We need two more unique identifiers for Bluetooth communication, and you can get them by running this command twice:
$ uuidgen

In my case, the following UUIDs were generated:
93384AB6-9EB1-4AF2-90FB-F88ABB6F79AF
4E98BE1C-F8D9-46AD-9D08-C0AAA7DFEE7A
this.bleServiceUUID = '93384AB6-9EB1-4AF2-90FB-F88ABB6F79AF'
this.bleCharacteristicUUID = '4E98BE1C-F8D9-46AD-9D08-C0AAA7DFEE7A'

Are you ready to launch the program on RPi? 10… 9… 8…
$ nohup sudo node /home/pi/gateguard/gateguard.js > /home/pi/gateguard/gateguard.log &

The nohup command guarantees that the program keeps working in the background after we disconnect from the RPi, while sudo is necessary for our program to run with permissions allowing it to access the BLE and IO ports of our microcomputer. All logs produced by our program will go to the gateguard.log file.
$ git clone https://github.com/inFullMobile/gateguard-ios.git

and open the project in Xcode. Before running it on the iPhone, set the correct URL address of the app in the cloud, which can be done in the project’s source file HttpClient.swift — change the address assigned to the constant:
// That's my URL address! Use yours ;)
static let gateGuardHost = URL(string: "https://pacific-lowlands-45092.herokuapp.com")!

Now configure the correct UUIDs for the BLE service and characteristic in the BLEService.swift file (remember to assign the same UUIDs to the service and characteristic as in the sources of the RPi app):
static var serviceUUID: CBUUID {
return CBUUID(string: "93384AB6-9EB1-4AF2-90FB-F88ABB6F79AF")
}
static var newTokenNotificationCharacteristicUUID: CBUUID {
return CBUUID(string: "4E98BE1C-F8D9-46AD-9D08-C0AAA7DFEE7A")
}

Ensure that Bluetooth on the iPhone is ON! Now you can launch the application on your phone. Congratulations! You did it! Our GateGuard should now work as expected — test it by alternately switching Bluetooth ON and OFF on the phone, and you will observe the result in the form of flashing or glowing LEDs and the operation of the relay which, connected to the electric lock, will unlock the gate to the building. For your information, the LED behavior mirrors the authorization result: the green LED lights up when a valid token opens the gate, and the red one when access is denied (you can trace this in the onWriteRequest code below).

Let's now go through the key pieces of the implementation. On the RPi side, subscription of the phone to the BLE characteristic is handled here:
onSubscribe(maxValueSize, callback) {
    console.log('BLE characteristic subscribed')
    shared.subscribedToCharacteristic = true
    this.valueDidChangeCallback = callback
}

Generation of the authorization token is done by this code:
const tokenId = Math.floor(Math.random() * MAX_TOKEN_ID)
const token = uuid()

Transferring the token to the cloud app is done this way:
registerTokenInCloud(tokenId, token, completionCallback) {
    console.log('register token in cloud service')
    if (!this.requestInProgress) {
        console.log('http request')
        const _this = this
        this.requestInProgress = true
        request(this._requestOptions(tokenId, token), function(error, response, body) {
            console.log('http response')
            _this.requestInProgress = false
            if (!error && response.statusCode == 200) {
                console.log('New token registered on cloud service with success')
                completionCallback(true)
            } else {
                console.error('Error while registering token in cloud service: ' + error)
                completionCallback(false)
            }
        })
    }
}

_requestOptions(tokenId, token) {
    return {
        uri: 'https://gateguard.herokuapp.com/register-token',
        method: 'POST',
        json: { "id": tokenId, "token": token }
    }
}

The next step is the validation of the token:
onWriteRequest(data, offset, withoutResponse, callback) {
    console.log('Authorization request from mobile phone')
    this.greenLedManager.off()
    var dataParts = data.toString().split('|')
    const tokenId = dataParts[0]
    const token = dataParts[1]
    var cachedToken = null
    try {
        cachedToken = shared.cache.get(tokenId, true)
        if (token.toUpperCase() == cachedToken.toUpperCase()) {
            console.log('Token is valid! Opening the gate...')
            this.greenLedManager.on(led.LEDModeEnum.solid, shared.ELECTROLOCK_ON_DURATION)
            this.relayManager.on(shared.ELECTROLOCK_ON_DURATION)
        } else {
            console.log('Token is invalid - access to the gate denied!')
            this.redLedManager.on(led.LEDModeEnum.solid, shared.ELECTROLOCK_ON_DURATION)
        }
        callback(0)
    } catch (err) {
        console.log('Error: ' + err)
        callback(1)
    }
}

Electric lock control is implemented in relay_module.js. Opening the gate happens here:
this.relayManager.on(shared.ELECTROLOCK_ON_DURATION)
On the iPhone side, the app scans for the RPi peripheral:

private func scanForPeripheral() {
    guard self.isElectronicKeyActive else { return }
    self.centralManager.scanForPeripherals(
        withServices: [CBUUID.serviceUUID],
        options: [CBCentralManagerScanOptionAllowDuplicatesKey: NSNumber(booleanLiteral: true)]
    )
}

and successively:
func peripheral(_ peripheral: CBPeripheral, didDiscoverServices error: Error?) {
    guard error == nil else { return }
    guard let service = peripheral.services?.filter({ $0.uuid == CBUUID.serviceUUID }).first else { return }
    peripheral.discoverCharacteristics([CBUUID.newTokenNotificationCharacteristicUUID], for: service)
}

func peripheral(_ peripheral: CBPeripheral, didDiscoverCharacteristicsFor service: CBService, error: Error?) {
    guard let characteristic = service.characteristics?.filter({ $0.uuid == CBUUID.newTokenNotificationCharacteristicUUID }).first else { return }
    self.storedCharacteristic = characteristic
    peripheral.setNotifyValue(true, for: characteristic)
}

Once the peripheral device is connected and the gate's button is pressed, the RPi generates a new authorization token (unknown to the iPhone app) and notifies the phone about its id. The phone then fetches the token from the cloud service and sends it to the RPi app right away:
self.bleService.tokenDidRequestCallback = { [weak tokenService, weak bleService] (tokenId: Int) in
    tokenService?.getToken(with: tokenId) { result in
        switch result {
        case .success(let token):
            bleService?.respond(withToken: token)
        case .error(let error):
            let errorMessage = String(describing: error)
            print("Error: \(errorMessage)")
        }
    }
}

The cloud app exposes two endpoints: one for registering a token (used by the RPi) and one for fetching it (used by the iPhone). The first is implemented like this:
post("register-token") { req in guard let id = req.json?["id"]?.int, let token = req.json?["token"]?.string else { return Response(status: .badRequest) } try self.cache.set("\(id)", token, expiration: Date(timeIntervalSinceNow: Constants.tokenDuration)) return Response(status: .ok) }The second is implemented this way:
get("token") { req in guard let id = req.query?["id"]?.int else { return Response(status: .badRequest) } var json = JSON() guard let token = try self.cache.get("\(id)") else { return Response(status: .noContent) } try json.set("id", id) try json.set("token", token) return json }As you can see, all pieces of the system talk to each other via BLE and HTTPS. Details of implementation you can always find in provided repositories. That was huge fun for me to build that system and I hope you may find it interesting too.
The goal of CanonHackathon was to prove that video projectors can be used for more than just watching movies. Teams came up with different ideas. Above, you can see the team that created an interactive art installation made out of three projectors, an Arduino, and a plant. Another team created a project that made it easy to connect many projectors together.
First, add our Maven repository to the project's build.gradle:

repositories {
    maven { url 'https://maven.infullmobile.com/public' }
}
compile 'com.infullmobile.android:infullmvp-kotlin:1.1.14'
testCompile 'com.infullmobile.android:infullmvp-kotlin-basetest:1.1.14'
To use the TNImageView library, add the JitPack repository to your root build.gradle:

allprojects {
    repositories {
        ...
        maven { url 'https://jitpack.io' }
    }
}

Include the following dependency:
compile 'com.github.AmeerHamzaaa:TNImageView-Android:0.1.2'

Create an activity layout so it looks like this:
<RelativeLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ImageView
        android:id="@+id/slideShowImage"
        android:layout_width="100dp"
        android:layout_height="100dp"
        android:clickable="true"
        android:src="@drawable/ice_cream"/>

</RelativeLayout>

Now let's initialize a TNImageView:
class MainActivityView @Inject constructor() : PresentedActivityView<MainActivityPresenter>() {

    @LayoutRes
    override val layoutResId = R.layout.activity_main

    val slideShowImage: ImageView by bindView(R.id.slideShowImage)

    override fun onViewsBound() {
        TNImageView().makeRotatableScalable(slideShowImage)
    }
}

Our result: the image can now be rotated and scaled with simple gestures.
To fit the picture to the surface we project onto, we can skew the bitmap with a Matrix:

private fun skewBitmap(src: Bitmap, xSkew: Float, ySkew: Float): Bitmap {
    val xCoordinates = 0
    val yCoordinates = 0
    val matrix = Matrix()
    matrix.postSkew(xSkew, ySkew)
    return Bitmap.createBitmap(src, xCoordinates, yCoordinates, src.width, src.height, matrix, true)
}

So now we can skew our ImageView.
1. Define data models for the Instagram API response:

data class ResponseInstaByTag(
    @field:SerializedName("data") val data: List<DataItem>
)

data class DataItem(
    @field:SerializedName("images") val images: Images
)

data class Images(
    @field:SerializedName("standard_resolution") val standardResolution: StandardResolution
)

data class StandardResolution(
    @field:SerializedName("url") val url: String
)

2. Define the API service interface:
interface InstagramApiService {
    @GET("tags/{tag}/media/recent?access_token=YOUR_ACCESS_TOKEN")
    fun getPicsByTag(@Path("tag") tag: String): Single<ResponseInstaByTag>
}

3. Initialize the Retrofit HTTP client. The whole main module should look like this:
@Module
public abstract class MainActivityModule {

    private static String BASE_URL = "https://api.instagram.com/v1/";

    @MainActivityScope
    @Provides
    static Retrofit providesRetrofit() {
        return new Retrofit.Builder()
                .baseUrl(BASE_URL)
                .addConverterFactory(GsonConverterFactory.create())
                .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
                .build();
    }

    @MainActivityScope
    @Provides
    static MainActivityView providesMvpView() {
        return new MainActivityView();
    }

    @MainActivityScope
    @Provides
    static InstagramApiService providesInstagramApiService(Retrofit retrofit) {
        return retrofit.create(InstagramApiService.class);
    }

    @MainActivityScope
    @Provides
    static Scheduler providesScheduler() {
        return Schedulers.io();
    }

    @MainActivityScope
    @Provides
    static GetPicturesByTagUseCase providesGetPicturesByTagUseCase(Scheduler scheduler, InstagramApiService instagramApiService) {
        return new GetPicturesByTagUseCase(scheduler, instagramApiService);
    }

    @MainActivityScope
    @Provides
    static MainActivityModel providesMvpModel(GetPicturesByTagUseCase getPicturesByTagUseCase) {
        return new MainActivityModel(getPicturesByTagUseCase);
    }

    @MainActivityScope
    @Provides
    static MainActivityPresenter providesMvpPresenter(MainActivityModel model, MainActivityView view) {
        return new MainActivityPresenter(model, view);
    }

    @MainActivityScope
    @Binds
    abstract Context bindsContext(SampleMvpActivity activity);
}

4. Fetch images from Instagram:
class MainActivityPresenter @Inject constructor(
    private val model: MainActivityModel,
    view: MainActivityView
) : Presenter<MainActivityView>(view) {

    private var disposableApiService: Disposable? = null
    private val tagName = "sky"

    override fun bind(intentBundle: Bundle, savedInstanceState: Bundle, intentData: Uri?) {
        loadPictures(tagName)
    }

    private fun loadPictures(tag: String) {
        disposableApiService = model.getPicturesByTag(tag)
            .subscribe(
                { imagesList ->
                    val links = imagesList.data.map { dataItem -> dataItem.images.standardResolution.url }
                    presentedView.startSlideShow(links)
                },
                { handleError(it) }
            )
    }
}

5. To start the slideshow, we can use the Interval operator: Observable.interval(delaySlideshow, TimeUnit.SECONDS). It will load the next image every 3 seconds. Note that we first load a picture and only then apply the skew transformation to it. Why? Because if we didn't transform the picture on every emission, we would lose the previous state of the transformation. Add this code to MainActivityPresenter:
// Properties used by the slideshow (skew factors should be tuned to your setup).
private var disposableTimer: Disposable? = null
private var currentIndex = 0
private val skewX = 0.3f
private val skewY = 0.0f

fun startSlideShow(urls: List<String>) {
    val delaySlideshow = 3L
    disposableTimer = Observable.interval(delaySlideshow, TimeUnit.SECONDS)
        .observeOn(AndroidSchedulers.mainThread())
        .map { getNextPictureUrl(urls) }
        .flatMap { url -> loadBitmapByUrl(url) }
        .map { bitmap -> skewBitmap(bitmap, skewX, skewY) }
        .subscribe(
            { bitmap -> presentedView.showPicture(bitmap) },
            { handleError(it) }
        )
}

private fun getNextPictureUrl(urls: List<String>): String {
    if (currentIndex >= urls.size) currentIndex = 0
    return urls[currentIndex++]
}

private fun loadBitmapByUrl(url: String): Observable<Bitmap> {
    return Observable.create<Bitmap> { emitter ->
        Picasso.with(context).load(url).into(object : Target {
            override fun onBitmapLoaded(bitmapParam: Bitmap, from: LoadedFrom?) {
                emitter.onNext(bitmapParam)
                emitter.onComplete()
            }

            override fun onPrepareLoad(placeHolderDrawable: Drawable?) {}

            override fun onBitmapFailed(errorDrawable: Drawable?) {
                emitter.onError(IllegalStateException("Bitmap loading has failed"))
            }
        })
    }
}

6. Show loaded and transformed pictures on the screen. Add this code to MainActivityView:
fun showPicture(bitmap: Bitmap) = slideShowImage.setImageBitmap(bitmap)

7. Always remember to dispose of disposables in the Presenter class to avoid memory leaks:
override fun unbind() {
    super.unbind()
    disposableApiService?.dispose()
    disposableTimer?.dispose()
}

8. Connect a Chromecast to the projector. On your phone, go to Settings -> Connected devices -> Cast -> Select your Chromecast. And now we have a slideshow on almost any 3D object in our room.
A BlockOperation runs several blocks (possibly concurrently), and its completionBlock fires once all of them are done:

import UIKit

let printerOperation = BlockOperation()

printerOperation.addExecutionBlock { print("I") }
printerOperation.addExecutionBlock { print("am") }
printerOperation.addExecutionBlock { print("printing") }
printerOperation.addExecutionBlock { print("block") }
printerOperation.addExecutionBlock { print("operation") }

printerOperation.completionBlock = {
    print("I'm done printing")
}

let operationQueue = OperationQueue()
operationQueue.addOperation(printerOperation)
For custom synchronous work, we can subclass Operation and override main() — here, applying a mono Core Image filter:

import UIKit

class MonoImageOperation: Operation {

    var inputImage: UIImage?
    var outputImage: UIImage?

    init(inputImage: UIImage) {
        self.inputImage = inputImage
    }

    override public func main() {
        if self.isCancelled { return }
        outputImage = applyMonoEffectTo(image: inputImage)
    }

    private func applyMonoEffectTo(image: UIImage?) -> UIImage? {
        guard let image = image,
              let ciImage = CIImage(image: image),
              let mono = CIFilter(name: "CIPhotoEffectMono", withInputParameters: [kCIInputImageKey: ciImage]) else {
            return nil
        }
        let ciContext = CIContext()
        guard let monoImage = mono.outputImage,
              let cgImage = ciContext.createCGImage(monoImage, from: monoImage.extent) else {
            return nil
        }
        return UIImage(cgImage: cgImage)
    }
}
Using it is as simple as adding the operation to a queue and reading outputImage in its completionBlock (remember to hop back to the main thread for UI updates):

import UIKit

class ViewController: UIViewController {

    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let image = UIImage(named: "image-1.jpg")
        let monoImageOperation = MonoImageOperation(inputImage: image!)
        monoImageOperation.completionBlock = {
            DispatchQueue.main.async {
                self.imageView.image = monoImageOperation.outputImage
            }
        }
        let operationQueue = OperationQueue()
        operationQueue.addOperation(monoImageOperation)
    }
}
An Operation wrapping asynchronous work has to manage its own state and emit the matching KVO notifications, so a common helper is a base class like this:

import Foundation

class AsyncOperation: Operation {

    public enum State: String {
        case ready, executing, finished

        fileprivate var keyPath: String {
            return "is" + rawValue.capitalized
        }
    }

    public var state = State.ready {
        willSet {
            willChangeValue(forKey: state.keyPath)
            willChangeValue(forKey: newValue.keyPath)
        }
        didSet {
            didChangeValue(forKey: oldValue.keyPath)
            didChangeValue(forKey: state.keyPath)
        }
    }
}
extension AsyncOperation {

    override var isAsynchronous: Bool { return true }

    override var isExecuting: Bool { return state == .executing }

    override var isFinished: Bool { return state == .finished }

    override func start() {
        if isCancelled { return }
        main()
        state = .executing
    }
}
With that base class in place, an asynchronous image download can flip its state to .finished only when the URLSession data task actually completes:

import UIKit

class AsyncImageDownloadOperation: AsyncOperation {

    var downloadedImage: UIImage?

    override func main() {
        let defaultSession = URLSession(configuration: .default)
        guard let imgUrl = URL(string: "https://unsplash.com/photos/M9O6GRrEEDY/download?force=true") else {
            state = .finished
            return
        }
        let dataTask = defaultSession.dataTask(with: imgUrl) { (data, response, error) in
            if let error = error {
                print("Image download encountered an error: \(error.localizedDescription)")
            } else if let data = data,
                      let response = response as? HTTPURLResponse,
                      response.statusCode == 200 {
                if self.isCancelled {
                    self.state = .finished
                    return
                }
                self.downloadedImage = UIImage(data: data)
            }
            // Finish the operation on both the success and failure paths.
            self.state = .finished
        }
        dataTask.resume()
    }
}
Sia uses encryption and erasure coding to ensure that files are private and remain available even if hosts go offline. The uploading process is as follows. First, the file is striped into chunks of 40 MiB. Reed-Solomon erasure coding is then applied to each chunk, expanding it into 30 pieces of 4 MiB. Erasure coding is like an M-of-N multisig protocol, but for data: out of N total pieces, only M are needed to recover the full 40 MiB chunk — here, any 10 of the 30 pieces suffice. This ensures a level of redundancy much greater than traditional replication. Each piece is then encrypted with the Twofish algorithm. Twofish was one of the five finalists of the Advanced Encryption Standard contest; although it was not selected for standardization, it is considered very secure. Finally, the pieces are sent to hosts to be stored. Currently, no host receives more than one piece of any given chunk. For example, host 1 might contain the first piece of chunk 1, chunk 2, chunk 3, etc., and host 2 might contain the second piece of the same chunks. This ensures that if host 1 is offline, you can still download pieces from every chunk. Even if host 1 is not offline, but merely slow, this scheme prevents you from being bottlenecked by the slow host.
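To make the redundancy concrete, here is a minimal sketch of the scheme's parameters (my own illustration; M = 10 is inferred from the article's numbers, since a 40 MiB chunk divided into 4 MiB pieces needs 10 data pieces):

// Per the article: each 40 MiB chunk becomes 30 pieces of 4 MiB,
// and any 10 of them are enough to rebuild the chunk.
struct ErasureCoding {
    let requiredPieces: Int  // M: pieces needed to recover a chunk
    let totalPieces: Int     // N: pieces produced per chunk
    var redundancy: Double { Double(totalPieces) / Double(requiredPieces) }
}

let sia = ErasureCoding(requiredPieces: 10, totalPieces: 30)
print(sia.redundancy) // 3.0x storage overhead, tolerating the loss of 20 of 30 hosts per chunk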
1. Brass Golem is where we are at the moment with our proof-of-concept, now in alpha testing. This version of Golem is focused solely on rendering in Blender and LuxRender, and although it will be useful to CGI artists, we consider CGI rendering primarily a proof of concept and a training ground. Brass Golem should be frozen within 6 months of the end of the crowdfunding period, after a full battery of tests. Even though we do not expect Blender CGI rendering to create enough turnover to justify all the work we have put into the project, this will be the first decentralised compute market.

2. Clay Golem is a big leap from the Brass milestone. The Clay milestone introduces the Task API and the Application Registry, which together are going to make Golem a multi-purpose, generalised distributed computation solution. Developers now have the means to integrate with Golem. This advance, however, may come at the cost of compromised stability and security, so this version should be considered an experiment for early adopters and tech enthusiasts. Prototype your new ideas and solutions on Clay.

3. Stone Golem will add more security and stability, but also enhance the functionalities implemented in Clay. An advanced version of the Task API will be introduced. The Application Registry will be complemented by the Certification Mechanism, which will create a community-driven trust network for applications. Also, the Transaction Framework will create an environment that will allow Golem to be used in a SaaS model.

4. Iron Golem is a deeply tested Golem that gives more freedom to developers, allowing them to create applications that use an Internet connection or run outside the sandbox. Of course, the decision to accept higher-risk applications will still belong to the providers renting out their compute power. Iron Golem should be robust, highly resistant to attacks, stable and scalable. Iron will also introduce various tools for developers that will make application creation far easier. Finally, the Golem Standard Library will be implemented.
Correctness of the results can be verified in several ways:

- simple correctness checking of the result, e.g. proof-of-work,
- redundant computation, i.e. a few providers compute the same part of the task and their results are compared (see the sketch below),
- computing a small, random part of the task and comparing it with the result sent by the provider, e.g. comparing the colour of a few random pixels in the rendered picture,
- analysis of output logs.
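As a sketch of the redundant-computation variant (my own illustration, not Golem's actual code), the requestor could accept a subtask result only when a strict majority of providers returned the same bytes:

import Foundation

// Accept a subtask result only if a strict majority of providers agree on it.
func verifyByRedundancy(results: [Data]) -> Data? {
    var votes: [Data: Int] = [:]
    for result in results {
        votes[result, default: 0] += 1
    }
    guard let (winner, count) = votes.max(by: { $0.value < $1.value }),
          count > results.count / 2 else {
        return nil // no consensus: reject and recompute
    }
    return winner
}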
import CoreBluetooth

class MiService: NSObject, CBCentralManagerDelegate, CBPeripheralDelegate {
    lazy var manager = CBCentralManager(delegate: self, queue: DispatchQueue.main, options: nil)
}

Note: the MiService class will serve as a container for all Bluetooth-related logic, and all methods and properties listed in the snippets below should be added to it. CBCentralManager is designed to communicate through the delegate methods listed in the CBCentralManagerDelegate protocol. It has one non-optional method — centralManagerDidUpdateState(_:) — and that's where we need to start. It's called whenever the Bluetooth module in an iOS device changes its state — e.g. when you turn Bluetooth on or off in the Settings app. Additionally, it's called just after the manager has been initialized. The state value that we want — in order to proceed — is .poweredOn (so make sure to have your Bluetooth activated). When a manager is in that state, it can discover peripherals and connect to them.
func centralManagerDidUpdateState(_ central: CBCentralManager) {
    if central.state == .poweredOn {
        manager.scanForPeripherals(withServices: nil, options: nil)
    }
}

var discoveredPeripherals: [CBPeripheral] = []

func centralManager(_ central: CBCentralManager, didDiscover peripheral: CBPeripheral, advertisementData: [String : Any], rssi RSSI: NSNumber) {
    print(peripheral)
    discoveredPeripherals.append(peripheral)
}
var miBand: CBPeripheral?

func connectToPeripheral(at index: Int) {
    manager.connect(discoveredPeripherals[index], options: nil)
}

func centralManager(_ central: CBCentralManager, didConnect peripheral: CBPeripheral) {
    manager.stopScan()
    miBand = peripheral
    peripheral.delegate = self
    peripheral.discoverServices(nil)
}

func centralManager(_ central: CBCentralManager, didFailToConnect peripheral: CBPeripheral, error: Error?) {
    print(error ?? "Unknown connection error")
}

The result of the connect operation is returned through one of two methods from the CBCentralManagerDelegate protocol: centralManager(_: didConnect:) in case of success, or centralManager(_: didFailToConnect: error:) in case of failure. Assuming everything went well, we can stop scanning for nearby devices and save the connected peripheral in a variable for convenience. Next, we should make our MiService implement the CBPeripheralDelegate protocol, so it can become a delegate of the chosen peripheral and receive notifications about its state changes.
func peripheral(_ peripheral: CBPeripheral, didDiscoverServices error: Error?) {
    if let error = error {
        print(error)
        return
    }
    print(peripheral.services ?? [])
    peripheral.services?.forEach { service in
        peripheral.discoverCharacteristics(nil, for: service)
    }
}

Once we have a list of available services, we can discover their characteristics. Discovered characteristics are available under a service's characteristics property after receiving a call to the peripheral(_: didDiscoverCharacteristicsFor: error:) method.
func peripheral(_ peripheral: CBPeripheral, didDiscoverCharacteristicsFor service: CBService, error: Error?) {
    if let error = error {
        print(error)
        return
    }
    print(service.characteristics ?? [])
    service.characteristics?.forEach { characteristic in
        if characteristic.properties.contains(.read) {
            peripheral.readValue(for: characteristic)
        }
        if characteristic.properties.contains(.notify) {
            peripheral.setNotifyValue(true, for: characteristic)
        }
    }
}

There is, however, one last setup step we need to perform. Reading a characteristic's value is an asynchronous operation, so to receive the actual value we need to implement the peripheral(_: didUpdateValueFor: error:) method from the CBPeripheralDelegate protocol.
func peripheral(_ peripheral: CBPeripheral, didUpdateValueFor characteristic: CBCharacteristic, error: Error?) {
    guard let value = characteristic.value else { return }
    let valueBytes: [UInt8] = value.map { $0 } // used in the parsing snippets below
    print("New value for: \(characteristic)")
}

When you run your application and connect to your band, you should see these kinds of logs:
New value for: <CBCharacteristic: 0x1c40bf620, UUID = 00000007-0000-3512-2118-0009AF100700, properties = 0x12, value = <0c0b0000 00070000 00010000 00>, notifying = YES>

The value field (<0c0b0000 00070000 00010000 00>) is the new value of the given characteristic. This particular log entry describes the characteristic which holds information about the number of steps and meters covered and kilocalories burned today. However, when analyzing these bytes, it's important to keep in mind that the majority of today's digital devices use little-endian byte ordering, which means that the least significant bytes of a number are on the left side. Each of the aforementioned values is stored on 4 bytes: the steps count on bytes 1–4, meters on bytes 5–8, and kilocalories on bytes 9–12. So, e.g., to read the number of steps we take bytes 0b, 00, 00, 00 as UInt8s:
<0c 0b000000 07000000 01000000>

Then we cast each to UInt32, shift the numbers left bitwise by subsequent multiples of 8 (starting from 0), and finally sum everything. I've created a simple extension which does just that:
extension UInt32 {
    static func from(bytes: [UInt8]) -> UInt32? {
        guard bytes.count <= 4 else { return nil }
        return bytes
            .enumerated()
            .map { UInt32($0.element) << UInt32($0.offset * 8) }
            .reduce(0, +)
    }
}

So if you want to extract the steps count in the peripheral(_: didUpdateValueFor: error:) method, you can do it like this:
let stepsCount = UInt32.from(bytes: Array(valueBytes[1...4]))
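The remaining values can be read the same way. Following the byte layout above, and matching the sample log (11 steps, 7 meters, 1 kcal):

let meters = UInt32.from(bytes: Array(valueBytes[5...8]))        // 7 in the sample log
let kilocalories = UInt32.from(bytes: Array(valueBytes[9...12])) // 1 in the sample log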
func measureHeartRate() {
    guard let miBand = miBand,
          let hrControlPoint = miBand.services?
              .first(where: { $0.uuid.uuidString == "180D" })?
              .characteristics?
              .first(where: { $0.uuid.uuidString == "2A39" }) else { return }
    miBand.writeValue(Data(bytes: [0x15, 0x2, 0x1]), for: hrControlPoint, type: .withResponse)
}

After that, your band should start blinking with a green light on its backside, which means it's actually measuring your heart rate. The measurement usually takes a few seconds, and after that you should receive its value in a Heart Rate Measurement characteristic update notification (assuming that you have registered for its notifications).
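If you don't want to blanket-subscribe to every notify characteristic as we did earlier, you can register for just the standard Heart Rate Measurement characteristic (2A37 in the Bluetooth GATT specification), e.g. with a sketch like this:

func subscribeToHeartRateMeasurement() {
    // 180D is the Heart Rate service, 2A37 the Heart Rate Measurement characteristic.
    guard let miBand = miBand,
          let hrMeasurement = miBand.services?
              .first(where: { $0.uuid.uuidString == "180D" })?
              .characteristics?
              .first(where: { $0.uuid.uuidString == "2A37" }) else { return }
    miBand.setNotifyValue(true, for: hrMeasurement)
}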
We’ve all been there. You just found some new cool game that is going to make your way home less unbearable. You run it and…
Cool Game Would Like to Use Your Location, says the popup.
“Well, okay, but why?”
This time, the explanation is right in the popup's subtitle: Cool Game needs your location.
“Alright, that explains a lot. But fine — I’ll give it a shot” you think and hit Allow.
Cool Game Would Like to Access Your Photos.
“What? It’s a game! Why would you want that?!”
That’s about the point when you get annoyed enough to go back to Home Screen and delete the app. Lots of people do.
The first interaction with the app is extremely important. Display multiple pop-ups asking for permissions to the user's sensitive data without proper explanation, and you might find your user among the 23% of people who delete an app just after first use. To prevent that from happening, I've listed some simple guidelines which you should always follow.
Bombarding the user with multiple popups is never a good idea, especially immediately after the first launch; it makes the app seem intrusive. Users want to feel their privacy is respected, and one way of achieving that is creating a reliable and honest onboarding process.
Clearly explaining why the app needs certain permissions builds trust. Users should understand that if they want to use a certain feature, they need to provide certain data.
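On iOS, that explanation is the purpose string shown in the permission pop-up, defined in the app's Info.plist. A minimal sketch (the strings are made up for the Cool Game example):

<key>NSLocationWhenInUseUsageDescription</key>
<string>Cool Game needs your location to match you with nearby players.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>Cool Game needs access to your photos so you can share screenshots with friends.</string>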
Another way is asking for permission in context. In most cases this is more effective, because onboarding carries one significant risk: users need to make a decision up-front, and even with proper education they might not be sure if they want to use the feature requiring the permission.
On the other hand, asking for gallery access during the process of sharing a photo eliminates that risk, because the user has already made the decision. They want to send the picture, and in order to do so they know they must press Allow. Plain and simple.
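For instance, on iOS the gallery permission can be requested at the exact moment the user taps the share button. A minimal sketch using the Photos framework (the function name is my own):

import Foundation
import Photos

func shareButtonTapped() {
    // Request access exactly when the user needs the feature, not at first launch.
    PHPhotoLibrary.requestAuthorization { status in
        DispatchQueue.main.async {
            if status == .authorized {
                // Proceed with picking and sending the photo.
            } else {
                // Explain that sharing photos requires gallery access.
            }
        }
    }
}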
But what if the user doesn't agree? Well, they must face the consequences ;) No, really: if the user denied a permission that is critical to the app, there should be a clear explanation of why the permission is necessary. You can always provide a button that takes the user to Settings where they can re-allow it, but this might be risky on iOS, as there are no official guidelines and the app can be rejected by Apple during App Store review. Also, don't try to work around a denied permission — it was the user's conscious decision and they have to live with it. It is still a good option to remind users that they can always change their mind, although an iOS app can ask for a certain permission only once, and digging through Settings is very poor UX: it's a long, unintuitive process and there is no way to guide the user through it.
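The Settings shortcut mentioned above can be implemented with a system URL (keeping the review-risk caveat in mind). A sketch:

import UIKit

func openAppSettings() {
    guard let url = URL(string: UIApplication.openSettingsURLString),
          UIApplication.shared.canOpenURL(url) else { return }
    // Takes the user straight to this app's page in the Settings app.
    UIApplication.shared.open(url)
}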
Permissions might be tricky but when handled with caution they might build reliance and trust. Just remember — ask nicely and you shall receive!
With the recent surge in Bitcoin’s price, the public is getting more and more divided as to its future. Is it a bubble that’s going to pop at some point or is it going to get widely adopted? Will it leave millions of people with broken dreams and terrible financial losses or is it going to replace traditional currencies?
Even the most renowned and experienced players in the financial markets are getting confused and changing their opinion on this issue. While this debate is getting the most attention in the crypto-world, there are more things happening that aren’t getting as much spotlight. For example, more than 1000 teams are now working on their own blockchain-based cryptocurrencies. They are all aimed at revolutionizing how we do financial transactions, data storage, healthcare, computations, communication, and much more. Today we are going to take a closer look at two of them to see how they can improve our lives.
One of the main drawbacks of Bitcoin is its lack of anonymity. If you have a computer that's powerful enough, you can trace back every transaction ever recorded and, e.g., calculate how much money every wallet holds. NAV Coin is a cryptocurrency which prevents that by enabling private money transfers. It also aims at becoming a default method of transferring money, with fast transaction times and low fees. A standard transaction takes 30 seconds and costs 0.01% in NAV; a private transfer takes 5 minutes and costs 0.5%.
At its foundation, NAV Coin makes use of the 2048-bit version of the RSA algorithm, capable of scaling up to 4096-bit with ease. RSA has been widely studied, tested, used, and not broken since its inception in 1977. This is the main factor differentiating this currency from its main competitors like Monero and ZCash, which opted for creating their own solutions (CryptoNote and zk-SNARKs respectively) that are fairly new and not tested nearly as much.
To make transactions truly anonymous, NAV Coin became the first cryptocurrency to operate on a dual blockchain — the additional one breaks the connection between a sender and a receiver as stated in their white paper:
The main technique that we have employed is to use a second blockchain we call the Subchain.
Instead of sending NAV directly to the receiver, the wallet encrypts the receiver's address and sends the transaction to one of the addresses provided by the randomly selected processing server. When this server receives the transaction, it creates a transaction of arbitrary size on the Subchain, which it sends to a randomly selected outgoing server.
This Subchain transaction has the receiver’s address and the amount of NAV to send encrypted and attached to it. When the outgoing server receives the Subchain transaction, it decrypts the data, randomizes the transaction amounts and sends the NAV to their intended recipient from a preloaded pool of NAV that is waiting on the outgoing server.
After the outgoing server has sent out the randomized NAV to the intended recipient, the incoming server will join together any NAV which has been processed and on the next transaction cycle send it to the outgoing server to replenish the preloaded pool of NAV for future transactions.
The consequence of this is that we have broken the transactional link between sender and receiver on the Nav Coin blockchain by routing the transaction information through the Subchain. The NAV sent to the recipient are not in any way connected to the NAV that are received.
Moreover, the NAV Coin team wrote a detailed article about how anonymous transactions work under the hood.
The NAV Coin team is currently working on a number of improvements to their cryptocurrency, e.g. further simplifying and securing the wallet and the transactions, cold staking, or enabling people to build Anonymous Decentralized Apps on top of the existing dual-blockchain system. However, the most game-changing feature might be the NavTech Polymorph. It's a partnership with the Changelly exchange that will allow people to use NAV Coin's dual-blockchain system to perform anonymous transfers using any cryptocurrency supported by the Changelly platform (at the moment, over 80 of them). Users will be able to make private and anonymous transactions even with currencies that do not support privacy by themselves. Moreover, coins can be exchanged on the fly, which means that you can anonymously send someone, e.g., Bitcoins to their Litecoin wallet.
Most of the world’s wealth is stored electronically — financial records, real-estate records, medical records, etc. However, these records are kept in centralized databases, so we have to trust somebody to store them, and they can also be hacked (and they have been, multiple times this year alone). Factom is a collaborative platform to preserve, ensure, and validate digital assets that aims at resolving these problems. It places data in its own structures, which are shared and secured over a distributed hash table (much like torrent files).
It enables people and businesses to use a mathematically provable “notarization” service. It also has built-in layers of redundant security that other blockchains do not offer. The Factom Blockchain anchors itself into the Bitcoin blockchain (and others) to take advantage of the security of Bitcoin’s hash rate. The layering effect of security ensures the immutability of its blocks.
Factom is most easily understood as a protocol that provides unlimited books of blank paper. Users of the protocol can take a book, label it with the title of their choice, open the book, and write on a page. When that page is submitted to Factom it cannot be altered or deleted. Nobody can back-date a page. All the data written into the book is preserved in the order it was presented to the Factom protocol.
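In practice, "writing on a page" boils down to publishing a fingerprint of your data. A minimal proof-of-existence sketch using SHA-256 via CryptoKit (my own illustration; the submitEntry call is hypothetical, standing in for Factom's real API):

import Foundation
import CryptoKit

// Hash the document locally; only the fingerprint needs to go on-chain.
func fingerprint(of document: Data) -> String {
    SHA256.hash(data: document)
        .map { String(format: "%02x", $0) }
        .joined()
}

// let deed = try Data(contentsOf: deedURL)
// submitEntry(chain: "property-records", content: fingerprint(of: deed))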
Factom guarantees security of the records by three different methods:
During an interview Factom’s CEO Paul Snow described three major use cases that are already being implemented thanks to partnerships with the Department of Homeland Security and the Gates Foundation:
1. We are working with the DHS [Department of Homeland Security] to provide audit trails for data collection on U.S. borders. Certainly, our technology will ensure sensors are secure against those that would tamper with them. Our technology will also provide audit trails that ensure other parties that the data collected is properly disclosed when required, the integrity has been maintained, and that data held back as irrelevant is provably irrelevant (not collected within some timeframe, or at some location).
2. We are working with the Gates Foundation to ensure medical records are maintained, and available to parties providing care in developing countries to individuals that may have been treated by many different organisations in the past. This application has to be available when needed, transportable to remote locations without Internet access, secure, private, and require little in hardware and human resources.
3. We are working on data management applications to ensure that mortgages can be processed faster, cheaper, and within the current regulatory framework. This involves auditing and tracking data collection from many different parties over time about information covering every aspect of a mortgage. Income verification, property history and maintenance, taxes paid/owed, payment histories, property surveys, zoning, etc., all produce many documents that must be reviewed in the course of issuing and maintaining a mortgage.
The third use case would have been particularly useful during the 2008 housing crisis, when banks were buying each other and had to merge huge amounts of data, which resulted in thousands of documents being lost. That alone cost banks billions, but it can easily be prevented in the future thanks to Factom. At the time of writing this article, there are over 140 mln records, 11 mln entries, and 110,000 anchors on Bitcoin's blockchain, which shows that the platform is already being used heavily.
Factom created two types of cryptocurrency. The first one, Entry Credits, is used to pay for the data stored on the blockchain (1 kB = 1 EC) and can be exchanged only for the second one, Factoids, as it is not publicly traded on any exchange. The Factoid, on the other hand, is more traditional (if you can say that about any cryptocurrency): the coins are used to secure and maintain the blockchain. The exchange rate between the two coins varies to maintain the price of 1 EC at $0.0001.
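A quick back-of-the-envelope sketch of that pricing:

let usdPerEntryCredit = 0.0001   // the peg mentioned above
let entryCreditsPerKilobyte = 1.0
let kilobytes = 1_000.0          // say, 1 MB of records

let costUSD = kilobytes * entryCreditsPerKilobyte * usdPerEntryCredit
print(costUSD) // 0.1, i.e. storing 1 MB costs about 10 cents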
The process of adding data to the system consists of ten one-minute rounds. At the beginning of each minute, every server takes responsibility for a subsection of the existing chains. When a user submits an entry to the system, one of the servers adds it at the end of the appropriate chain. (Entries are grouped in chains to speed up searching: when you're looking for a particular entry, you only have to search through the entries related to it in some way.)
After that, all of the servers validate the new state and add the new entry to their copies of the chains. At the end of the minute, all servers confirm that they hold the same data and reveal a deterministic secret number (a Reverse Hash, i.e. a successive pre-image of a long hash chain). The collection of Reverse Hashes is then combined to create a seed that reassigns responsibility for chains among the servers for the next minute.
This process is then repeated 10 times. After the 10th minute ends, the system randomly selects the server that is going to write the anchor into the Bitcoin blockchain by performing a transaction that stores all the needed hashes.
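The "successive pre-image of a long hash chain" can be pictured with this generic sketch (my own illustration, not Factom's actual code): a server publishes the chain's final link up-front, then in each round reveals the previous link, which anyone can verify with a single hash.

import Foundation
import CryptoKit

// Build a hash chain: x0 = seed, x(i) = SHA256(x(i-1)).
func buildHashChain(seed: Data, length: Int) -> [Data] {
    var chain = [seed]
    for _ in 0..<length {
        chain.append(Data(SHA256.hash(data: chain.last!)))
    }
    return chain
}

// A revealed value is valid if it hashes to the previously published link.
func isValidPreimage(_ revealed: Data, of published: Data) -> Bool {
    Data(SHA256.hash(data: revealed)) == published
}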
As we can see from the two examples provided above, blockchain and cryptocurrency can resolve many real-world problems. These are only a few of the most interesting projects involving blockchain. It’s still a very fresh and quickly evolving technology, so it’s very likely that in the future we’re going to see many innovative applications that no one even thought of yet.
Continue reading about altcoins in the second part of this article
Pamela is currently the CTO at Woebot, a friendly chatbot project that teaches Cognitive Behavioral Therapy techniques to help people regulate their mood and energy levels. Previously she worked at both Google Mountain View and Google Australia, doing developer relations for the Maps and Wave APIs. In 2010 she founded the San Francisco chapter of GirlDevelopIt, a non-profit organization teaching women how to code. It hosts a wide range of technical classes and workshops, from beginner classes to more advanced topics.
Khalia Braswell is a user-experience engineer at Apple. She also created INTech Camp for Girls in North Carolina, whose main goal is to inspire girls to do innovative tech projects. To date, INTech has reached hundreds of minority girls through its dedicated camps. In 2016 she made the 30 Under 30 list by the Charlotte Mecklenburg Black Chamber of Commerce, and she has been featured among the 10 Black Female Leaders in Tech to Watch by Hackbright Academy, 6 Young Black Women Making a Difference in Tech by New Relic, and The 10: These Black Women in Computer Science Are Changing the Face of Tech by The Root.