What is DevOps?
DevOps, a combination of “development” and “operations”, has brought about a cultural change that closes the gap between two important teams that historically functioned separately. It reflects a culture of collaboration in which software development and IT operations teams automate and integrate their processes so that they can build, test, and release software in the shortest possible time.
As a result, a company's capacity to deliver applications and services almost immediately also increases, allowing it to serve its clients better and compete more efficiently in the market. Teams can use innovative practices to automate processes that were previously carried out manually, relying on tools that help them manage and optimize applications faster.
DevOps can operate under a variety of models in which even QA and Security teams are integrated into the mix of development and operations, from development and testing through to application deployment. This is especially common when security is a priority for the project.
How do I know if I need DevOps?
If your company or organization is going through any of the following situations, it is quite likely that you need to adopt DevOps:
The development team is having trouble optimizing old code, creating new code, or preparing new product features.
Disparities between development and production environments have caused errors and compatibility failures.
Improvements previously made to the software deployment process have become obsolete.
Your company suffers from a slow time-to-market: the process from code development to production is too slow and inefficient.
How to carry out a DevOps implementation process?
Implement a DevOps mindset
Identify areas where your company’s current delivery process is ineffective and needs to be optimized. This is a great opportunity to bring about real change in your organization, so be open to experimentation. Learn from short-term failures; they will help you improve. Do not accept inefficient dynamics just because they are “traditional”.
Take advantage of the metrics
It is important to choose the right metrics to verify and monitor the project. Do not be afraid to measure things that might not look good at first: it is precisely by facing them that you will be able to see real progress, as well as real business benefits.
Accept that there is no single path
Each organization goes through different DevOps circumstances tied to its business and culture. How things should be done will depend more on how habits and patterns within the teams change than on the tools they use to enable automation.
Don’t leave QA aside
Companies that implement DevOps often focus on automating deployments and leave QA needs aside. Although not all tests can be automated, it is essential to automate at least the tests that run as part of the continuous integration process, such as unit tests or static code analysis.
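To make this concrete, here is a minimal sketch of the kind of unit test that can run automatically on every commit as part of continuous integration. The function under test and its behavior are hypothetical, purely for illustration:

```python
import unittest

# Hypothetical function under test; in a real project this would live in
# the application code, not next to the test.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)
```

In a DevOps pipeline, a runner such as `python -m unittest` executes checks like these on every push, so a regression is caught before the code ever reaches production.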
Learn to look at automation from a smarter perspective
You can’t really speed up delivery processes without automating. The infrastructure, the environment, the configuration, the platform, and the tests can all be expressed as code. Therefore, if something begins to fail or becomes ineffective, it is probably time to automate it. This brings various benefits, such as reducing delivery times, increasing repeatability, and eliminating configuration drift.
There are several reasons why DevOps is useful for development teams; below we examine the most important ones:
Due to its high predictability, the failure rate under DevOps is quite low.
Everything is reproducible, so you can restore a previous version without problems.
If the current system goes down or a new version fails, DevOps offers a quick and easy recovery process.
Time-to-market can be reduced by up to 50% through optimized software delivery. Good news for digital and mobile applications.
Infrastructure concerns are built into the DevOps process, so teams can deliver higher quality in application development.
Security is incorporated into the software life cycle, which also helps to reduce defects.
DevOps leads to a more secure, resilient, stable, and fully auditable way of operating change.
Cost efficiency during development is possible when implementing DevOps tools and models
It is based on agile methods, allowing a large code base to be divided into smaller, more adaptable pieces.
Before DevOps arrived, the development team and the operations team worked in isolation, and builds took far more time than they typically need. This model was quite inefficient: testing and deployment happened only after design and build, and manual code deployment introduced multiple human errors in production, further slowing down any project.
With the implementation of DevOps, development speed has increased, resulting in fast deliveries; the quality of updates is higher; and security has also improved, making it one of the most reliable approaches. Today’s digital transformation has raised the demands on software development, and DevOps has shown that it can cover these needs quite well and meet market expectations.
Project management in the technology field is a key element for the success of an operation, since it lays the foundations and general structure of the project to be carried out. The other elements are built on this structure: the interface design, the user experience, concept validation, quality tests, among others.
For these projects to progress, we must first understand their purpose, as well as the current, future, explicit, and implicit needs of our clients, their end users, and the respective stakeholders. Then, once we understand the scope of the solution, we divide the project into phases that allow us to carry it out in an agile and efficient way.
This is where the difficulty arises, since we must strategically assign each activity, understanding that we must complete one to continue with the next. However, questions constantly come up: is this task or activity “done”? Is it related to a future task that may affect its status?
Questions like these are among the most common, and they should be the main subject of discussion between the client (Product Owner) and the project leaders on the software development company’s side.
Define the meaning of “Done”
The Definition of “Done” is met when all the conditions, or acceptance criteria, that a software product must satisfy are fulfilled: it functionally meets the requirements of a user, client, or team, or is available for use by another system, and it complies with the established quality parameters. This prevents incomplete features or functionality from being put into production, which would generate errors and rework when corrections have to be made later, as well as dissatisfaction among customers and/or end users.
This definition is determined at the beginning of each project, usually in meetings held by the Product Owner and the development leaders. However, as the project progresses, the definition is often restated several times, so that expectations end up aligning with practice, as well as with the various quality parameters, the work process, and so on.
Once it is clear when a feature or functionality is “done”, you can get a better idea of where you are in the project, allowing more accurate and clearer decisions about how much remains to be completed and when the respective resources should be brought into the project.
A key element that helps enormously with this definition is the so-called user stories, which can be defined as the paths that users follow within our product to carry out the tasks they want.
One of the biggest risks of not having a clear, shared definition between the parties arises precisely when presenting all the requirements completed during an iteration: at review time we may realize that these requirements are not completely finished, generating debates about real progress.
The importance of agreeing on terms
When a user story or an increment (set of stories) is “Done”, everyone on the team should understand the same thing: that this functionality or increment can now go into production. Although this understanding is the result of transparency and trust between the parties, it becomes most evident when moving to the next phase of the project, for example when the Scrum team finishes a sprint.
Another important aspect is to define how complete a feature, functionality, story, or increment must be to move to the next stage. If this is of vital importance for the entire project, it should be treated as a minimum criterion. Otherwise, misunderstandings between the parties will occur again.
However, the definition of “Done” may evolve. When the relationship between the Product Owner and the Scrum team matures and develops positively, it may happen that the definition of “Done” previously used in the increments is expanded during the Retrospective and feedback meetings, including new criteria of higher quality, based on more practical parameters.
Relationship between “Done” and Acceptance Criteria
An important element in determining the status of a feature or functionality is the so-called Acceptance Criteria: in their most basic definition, they are met when the item satisfies all the minimum requirements necessary to be put into production, at the technical, functional, and operational levels, among others.
The acceptance criteria are important for:
Manage expectations, both for the Product Owner and for the Scrum team.
Define scope and reduce ambiguity.
Establish test criteria for quality control.
Avoid scope change in the middle of the Sprint, which directly affects planning.
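As an illustration of how acceptance criteria can be made verifiable rather than left as prose, the sketch below encodes three hypothetical criteria for a registration user story as executable checks. The story, the function, and the validation rules are all invented for the example:

```python
import re

# Hypothetical acceptance criteria for the user story
# "As a user, I can register with a valid email address",
# expressed as executable checks rather than prose.

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def register_user(email: str, users: set) -> bool:
    """Illustrative registration logic: accept valid, unique emails."""
    if not EMAIL_RE.match(email):
        return False
    if email in users:
        return False
    users.add(email)
    return True

users = set()

# Criterion 1: a valid, new email is accepted.
assert register_user("ana@example.com", users)

# Criterion 2: a malformed email is rejected.
assert not register_user("not-an-email", users)

# Criterion 3: duplicate registrations are rejected.
assert not register_user("ana@example.com", users)
```

When the criteria exist in this form, “Done” stops being a matter of opinion: the story is done when the checks pass, which also gives the Scrum team ready-made regression tests.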
Use of the Definition of “Done” in the Phases of a Project
Once this definition is understood, along with the minimum Acceptance Criteria, those responsible for planning the project can prepare the appropriate roadmap for the development of the product or service, as well as the overall estimate of the effort required to deliver the User Stories effectively.
In turn, it determines the amount of work to be planned into each Sprint, since it serves as a guide for meeting the criteria established in the planning meeting and in the subsequent follow-up and retrospective meetings. This provides better evidence at the completion of one phase of the project and the start of the next.
Impacts of the lack or misapplication of the Definition of “Done”
One of the main effects of an incomplete or misapplied Definition of “Done” is “Technical Debt”, which refers to the additional work produced by an implementation of a functionality or feature that was not completed, or that did not comply with all the minimum necessary requirements, such as having gone through the appropriate quality tests or documentation.
The main consequence is that the team must rework previously approved features to meet the minimum requirements, which generates delays and wasted resources.
Tips to keep in mind when writing the Definition of “Done”
Involve the team: include as many responsible team members as possible in the planning stage. This way everyone can give their point of view and bring up topics that would otherwise only surface later in the project, improving the vision of the project and of when one phase ends and another begins.
Write user stories: this valuable resource allows us to identify user needs from their point of view. The better defined they are, the easier it will be to determine when development is “Done”.
Comply with the technical requirements: it is very important that all functionalities and technical requirements are met, so that the development meets the accepted quality parameters and the application delivers the expected performance.
Execute functional and non-functional Testing: Validating the quality and that the technical requirements are properly tested is key. No development of any kind should be considered “Done” without having gone through the Testing & QA process.
Contemplate the Epics: at this level, “Done” can refer to a strategic priority of the organization, an objective of the business plan, or a set of requirements that satisfies a market need.
The art of managing a project goes beyond dividing it into phases and allocating resources and delivery dates; the most important thing is to thoroughly understand the client’s requirements and to have the technical mastery needed to allocate resources correctly.
As part of its Agile Services proposal, Huenei agrees with its clients on the Definition of “Done” for its services, providing transparency in the results and confidence in the progress of the work.
Likewise, as part of our commitment to our values of Customer Orientation and Efficiency, we monitor the effectiveness of our teams by measuring the level of rework caused by deficiencies in the application of the Definition of “Done” and its acceptance criteria.
When it comes to carrying out tasks, the key is for the entire team (both internal and on the client’s side) to be in tune and understand what is needed for a software development, feature, or functionality to be considered fully completed, allowing the project to move forward or a flag to be raised when something is missing.
In our previous article “Hardware Acceleration with FPGAs”, we learned about the programmable logic devices called FPGAs (Field-Programmable Gate Arrays), discussed their use alongside CPUs to achieve this acceleration, and outlined some of their advantages over GPUs (Graphics Processing Units).
To get a better idea, we will delve into the technical aspects to understand how they generally work, from how acceleration is achieved and how the technology is accessed, to some considerations for using it within a software solution.
The two main reasons why FPGAs outperform CPUs are the possibility of using custom hardware and their great capacity for parallelism.
Let’s take as an example the sum of 1,000 values. CPUs are made up of fixed, general-purpose hardware, so the process begins by storing the values in RAM; they then pass to the fast internal memory (the cache), from which two of them are loaded into registers.
After configuring the Arithmetic Logic Unit (ALU) for the desired operation, a partial sum is performed and the result is stored in a third register. Then a new value is fetched, the partial result from the previous step is loaded, and the operation is performed again.
After the last iteration, the final result is saved in the cache, where it remains accessible in case it is required later, and is eventually stored in system RAM as a consolidated value of the running program. In an FPGA, this entire cycle is reduced to 1,000 registers whose values are added directly, at the same time.
We must bear in mind that, sometimes, the FPGA will have to read values from system RAM, finding itself in a situation similar to that of the CPU. Even so, its advantage is that it can dedicate 1,000 registers exclusively to storing the values for the sum, whereas a CPU has only a few registers, which must be shared across different operations.
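The contrast between the two summation strategies can be sketched in software. The model below is illustrative only (it is not FPGA code): it counts the sequential addition steps of the CPU case against the levels of a parallel reduction tree, which is how the 1,000 simultaneous registers of the example would combine their values:

```python
# Conceptual comparison of the two summation strategies described above.
# A CPU accumulates sequentially, one addition per step; an FPGA can wire
# the additions as a tree, so the number of *levels* (and hence the
# latency) grows logarithmically instead of linearly.

def sequential_sum_steps(values):
    """CPU-style accumulation: one addition per time step."""
    total, steps = 0, 0
    for v in values:
        total += v
        steps += 1
    return total, steps

def tree_sum_levels(values):
    """FPGA-style reduction tree: all pairs in a level add in parallel."""
    level, levels = list(values), 0
    while len(level) > 1:
        pairs = [level[i] + level[i + 1] for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:            # carry the odd element to the next level
            pairs.append(level[-1])
        level, levels = pairs, levels + 1
    return level[0], levels

values = list(range(1, 1001))          # the 1,000 values from the example
total_seq, steps = sequential_sum_steps(values)
total_par, levels = tree_sum_levels(values)

assert total_seq == total_par == 500500
print(steps)    # 1000 sequential additions in the CPU model
print(levels)   # 10 parallel levels in the FPGA model
```

Both strategies compute the same result, but the sequential model needs 1,000 addition steps while the tree needs only 10 levels, which is the essence of the latency advantage described above.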
Another example that helps to understand the power of FPGA acceleration: imagine a loop of five instructions in the CPU’s program memory that takes data from memory, processes it, and returns it. Execution is sequential, one instruction per time slot, so the CPU must reach the last instruction before starting the cycle again.
In an FPGA, the equivalent of each of the CPU instructions can be executed in a parallel block, feeding each input with the previous output.
As the image shows, the time elapsed until the first result is obtained is the same in both cases; however, in the period in which a CPU produces 4 results, the FPGA produces up to 16.
Although this is a didactic example, keep in mind that, thanks to the custom hardware with which FPGAs can be configured, the acceleration in a real implementation can reach hundreds or thousands of times.
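The pipelining argument can also be modeled in a few lines. Assuming, as in the example, a task of five stages and a window of 20 time slots, the sequential (CPU-style) version completes 4 results while the pipelined (FPGA-style) version completes 16:

```python
# A small model of the pipelining argument above: a 5-stage task run
# sequentially (CPU-style) versus as a pipeline (FPGA-style), counting
# results produced within the same time window.

STAGES = 5

def cpu_results(cycles: int) -> int:
    """Sequential execution: each result needs all 5 stages back to back."""
    return cycles // STAGES

def fpga_results(cycles: int) -> int:
    """Pipelined execution: after the pipeline fills, one result per cycle."""
    return max(0, cycles - STAGES + 1)

window = 20  # time slots in which the CPU completes 4 results
assert cpu_results(window) == 4
assert fpga_results(window) == 16
```

Note that the first result still takes 5 slots in both cases; the pipeline wins on throughput, not on the latency of a single result, which matches the figure described above.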
It should be noted that the use of FPGAs as accelerators accompanying a CPU has spread in recent years: on the one hand, thanks to the high transfer rates achieved by the PCI Express protocol (used in the PC to interconnect the two devices); on the other, given the speeds and storage capacity offered by DDR memories.
For acceleration to make sense, the amount of data involved has to be large enough to justify the whole process of moving it to the accelerator, and we must be dealing with a computationally complex algorithm, whether one whose steps can be pipelined (each step feeding the next) or one that can be divided into parallelizable parts.
The Hardware Needed for Acceleration
The two main manufacturers of FPGAs, Xilinx and Intel, offer a variety of accelerator cards, called Alveo and PAC respectively, that connect to the PCI Express buses of a server.
When including them in our infrastructure, we must consider the specifications of the receiving server equipment, as well as the system configurations and the licenses of the development software.
There are services, such as Amazon, that elastically offer ready-to-use development images, as well as instances with Xilinx hardware. Keep in mind that there are also other services, such as Microsoft Azure, whose instances are based on Intel devices, or Nimbix, which supports both platforms, to name a few.
Accelerator development is a circuit-design task that involves the use of a hardware description language (HDL), although you can alternatively use High-Level Synthesis (HLS), based on a subset of the C/C++ language. Finally, OpenCL can be used, just as in the development of GPU accelerators. Traditionally, this kind of technology has been the domain of electronic engineering specialists rather than programming experts.
Fortunately, both the technology providers and third parties offer ready-to-use accelerators for well-known, widely used algorithms. Accelerated software applications are written in C/C++, but APIs are available for other languages, such as Python, Java, or Scala.
If you need to perform additional optimization, you will need to wrap the C/C++ application in a client/server model, create a plugin, or write a binding. In addition, there are frameworks and libraries that are ready to use without changes to the application, related to Machine Learning, image and video processing, SQL and NoSQL databases, among others.
At Huenei, we can accompany you through the adoption of this technology. After analyzing your application, we can offer the infrastructure that best suits your processes. One option is advice on the use of available frameworks, libraries, and accelerated solutions that do not require changes to the source code.
Another alternative is refactoring with a special API and custom accelerators, or directly starting new developments with these solutions in mind. In any case, you will be guided by specialists who are up to date with the latest trends in this area, which are so necessary to face the challenges of exponentially growing data and computationally complex algorithms.
The creative process of designing an efficient interface and user experience is crucial in a mobile application development project, since important variables depend on it, such as the end-user adoption rate and the overall satisfaction index.
For this reason, software development companies place a lot of emphasis on optimizing the design and approval process, validating all proposals before starting the development phase, which is the one that consumes the most resources. To do so, they turn to tools such as the interactive prototype.
Today there are many tools that make it possible to review all phases of a project, stimulate feedback among team members, and, not least, give us a fairly clear idea of what the user experience will be like.
Taking into account the importance of this phase in a development project, we have compiled the best tools to make an interactive prototype:
Flinto is an application for macOS, widely recognized and used by the most demanding designers to create interactive and animated prototypes in software development. Among its main features are the following:
Animated transitions: Flinto has an excellent transition designer that lets you create your own animated transitions very simply. You do not need to program anything; the transition designer is quite intuitive, and you just have to place an element where you want it.
Various behaviors: With the behavior designer you can create micro-interactions within the screens, a perfect function if you want to add loop animations, buttons, scrolling animations or switches.
Scrolling feature: You can add scrollable areas with just one click and also create amazing animations based on the scrolling found in the behavior designer.
Drawing resources: excellent drawing tools that allow you to design simple models to your liking or edit texts and shapes imported from Sketch. You can even animate vector curves.
JustInMind is a fairly intuitive and dynamic “all in one” prototyping tool recommended for applications and web pages; with it you can go from wireframes to highly interactive prototypes.
JustInMind allows you to start a project from scratch, for example going from clickable wireframes to 100% interactive prototypes, in order to get a very broad idea of mobile gestures and online interactions, making UX/UI work easier. Among its main features are:
Efficient design: with this tool you can choose the size, style and distribution of all kinds of elements in the user interface in order to adjust it to the appearance of your screens.
Online interactions: You have the possibility to design web experiences through a wide catalog of interactions and animations, ranging from simple links to increasingly complex interactions.
Infinity of mobile gestures: You can choose from a wide variety of gestures that will allow you to slide, move, touch, rotate and even pinch the prototype of your mobile application.
Liquid design: define liquid containers thanks to which the elements of a page can adapt smoothly to different screen sizes, aspect ratios, and orientations.
Automatic size adjustment: it allows you to intuitively change the size of groups of elements on each page, saving multiple changes in the design and leaving real room for creation.
Featured object attachment: this feature enables objects to be pinned in containers or displays, a capability that, combined with free movement, provides more responsive experiences.
InVision makes it easy to share and comment on a project live with the client, making it an excellent way to manage and streamline times and processes.
You can also manage versions of your projects by synchronizing Illustrator or Photoshop with the app, preventing you from getting lost in a maze of folders and layers across multiple files. Some of its features include:
Click and hover hotspots: you can choose between click or hover access points to indicate the user flows in the prototype.
Access points to other windows: InVision enables you to configure links that link to other screens in your prototype, external URL style, anchor points, quick access and more.
Template design: with recurring elements such as menus in mind, this function lets you configure templates with access points for this kind of need and apply them globally to the prototype with one click.
Gesture Inclusion and Transitions: Gestures like double tap and swipe can be added to show interactions in the prototype and enhance the user experience in real life.
Various interactions and animations: Introduce animations such as pulling down or to the right to reveal the interaction capacity of your prototype.
Check the InVision website to learn more about their tool.
Not clear about the benefits of making an interactive prototype? We explain them in our blog post “Benefits of making an interactive prototype”.
Fluid UI
Ideas can be prototyped in no time thanks to Fluid UI; not only that, you can share, collaborate, and gather feedback from your team. It is recommended for small applications.
Design the layout of your app’s primary views, linking each view to controls that connect with the others, producing a more interactive, dynamic, and representative demonstration of the final result. Its main characteristics are:
Effective prototyping: it has a fairly complete pre-built user interface kit for Material Design, wireframing, iOS, and more.
Collaboration in real time: provides the possibility for the entire team to work simultaneously on the same prototype.
Dynamic previews: Adding interactions to your prototype will be more fun, efficient and productive thanks to the visual linking function.
High and low fidelity: Fluid UI is compatible with any style you require, regardless of whether it is a high or low fidelity prototype.
Access from any device: prototypes can be accessed in the desktop app or through the browser without any problem.
Mobile testing available: With Fluid UI you can test your prototypes on various mobile devices, such as phones or tablets, through free playback apps.
Proto.io’s most outstanding feature is perhaps the inclusion of an engine that lets you create animations for mobile applications through a timeline that determines how long each animation lasts. Other features include:
Easy drag and drop: move files from the desktop directly onto the panel for quick upload, making it easier to find all your assets.
Masking Tool: Mask, crop, frame or create animations without leaving the Proto.io editor.
Various animations available: They include scale, move, rotate, resize and fade and can be applied to any element.
Actions: With an extensive series of actions, you can navigate, use logic, control emails, GIFs, audios, videos, switch screens, make a call, visit a URL and more.
At https://proto.io/ you can find all the details of this tool.
A good user experience is becoming an end rather than a means, given the weight it carries in helping companies achieve their business objectives.
A satisfied user is not only more likely to keep using our application, increasing the possibilities of monetization, but also tends to recommend it more, allowing us to reach more users without investing as many resources in marketing.
If you want to know more about our development services, we invite you to visit our Mobile Development page.
The revolution derived from the rise and spread of computers marked a before and after, most visibly in the dizzying pace of technological progress.
At the root of this development was the birth of the CPU. As we well know, CPUs are general-purpose devices designed to execute sequential code. In recent years, however, applications with a high computational cost have emerged, generating the need for specific hardware architectures.
One solution consisted of creating graphics processors (GPUs), which were introduced in the 1980s to free CPUs from demanding tasks, first with the handling of 2D pixels, and then with the rendering of 3D scenes, which involved a great increase in their computing power. The current architecture of hundreds of specialized parallel processors results in high efficiency for executing operations, which led to their use as general-purpose accelerators. However, if the problem to be solved is not perfectly adapted, development complexity increases, and performance decreases.
What are FPGAs?
Both CPUs and GPUs have a fixed hardware architecture. The alternative, with a reconfigurable architecture, is the device known as the Field-Programmable Gate Array (FPGA). FPGAs are made up of logic blocks (which solve simple functions and, optionally, save the result in a register) and a powerful interconnection matrix. They can be thought of as blank integrated circuits with a great capacity for parallelism, which can be adapted to solve specific tasks in a performant way.
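As a toy illustration of the idea of a logic block, the sketch below models a 2-input look-up table (LUT) in Python: the same generic block becomes an AND, XOR, or OR gate depending on the truth table loaded into it, and blocks can be wired together into larger circuits. This is a conceptual model only, not how FPGAs are actually programmed:

```python
# A toy model of the FPGA building block described above: a logic block
# is essentially a small truth table (look-up table, LUT) plus an
# optional output register. By filling in the table, the same block can
# "become" any boolean function of its inputs.

def make_lut(truth_table):
    """Return a 2-input logic function defined entirely by its truth table."""
    return lambda a, b: truth_table[(a, b)]

# The same generic block configured as three different gates:
AND = make_lut({(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1})
XOR = make_lut({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0})
OR  = make_lut({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1})

# Wiring blocks together: a half adder built from two LUTs.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)   # (sum bit, carry bit)

assert half_adder(1, 1) == (0, 1)
assert half_adder(1, 0) == (1, 0)
```

Reconfiguring an FPGA amounts to rewriting thousands of such tables and the connections between them, which is why the same chip can implement very different circuits.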
What are they used for?
The general idea behind this technology is that a complex, computationally demanding algorithm is moved from an application running on the CPU to an accelerator implemented on the FPGA. When the application requires the accelerated task, the CPU transmits the data and continues with its own work; the FPGA processes the data and returns it for later use, freeing the CPU from the task and executing it in less time.
The acceleration factor obtained will depend on the algorithm and on the amount and type of data. It can range from a few times to thousands of times, which for processes that take days to compute translates into hours or minutes. This is not only an improvement in the user experience but also a decrease in energy and infrastructure costs.
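A rough way to see when offloading pays off is to compare the CPU time saved against the cost of moving the data to the accelerator. The numbers below are invented for illustration, not measurements of any real device:

```python
# A back-of-the-envelope model of when offloading pays off, under
# assumed (illustrative) numbers: the data transfer over PCI Express
# adds overhead, so the FPGA only wins once the compute saved
# exceeds the cost of moving the data.

def cpu_time(n_items, cpu_per_item):
    return n_items * cpu_per_item

def fpga_time(n_items, fpga_per_item, transfer_per_item, fixed_overhead):
    return fixed_overhead + n_items * (fpga_per_item + transfer_per_item)

# Illustrative parameters (seconds), not measurements:
CPU_PER_ITEM = 1e-4          # CPU compute time per data item
FPGA_PER_ITEM = 1e-6         # FPGA compute time per data item (100x faster)
TRANSFER_PER_ITEM = 2e-5     # PCIe transfer time per data item
OVERHEAD = 0.5               # fixed setup cost of invoking the accelerator

# For a small batch the fixed overhead dominates and the CPU wins...
small = 1_000
assert cpu_time(small, CPU_PER_ITEM) < fpga_time(
    small, FPGA_PER_ITEM, TRANSFER_PER_ITEM, OVERHEAD)

# ...while for a large batch the accelerator is clearly faster.
large = 10_000_000
assert cpu_time(large, CPU_PER_ITEM) > fpga_time(
    large, FPGA_PER_ITEM, TRANSFER_PER_ITEM, OVERHEAD)
```

This is why the article insists on the amount of data being large enough: below the break-even point, the transfer penalty makes offloading slower than staying on the CPU.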
While any complex and demanding algorithm is potentially accelerable, there are typical use cases, such as:
Deep / Machine Learning and Artificial Intelligence in general
Predictive analysis applications (e.g. advanced fraud detection), classification (e.g. new customer segmentation, automatic document classification), recommendation engines (e.g. personalized marketing), etc. Accelerated frameworks and libraries are available, such as TensorFlow, Apache Spark ML, Keras, Scikit-learn, Caffe, and XGBoost.
Financial Model Analysis
Used in banks, fintech, and insurance, for example, to detect fraudulent transactions in real time. Also for risk analysis: with CPUs, banks can only run risk models once a day, but with FPGAs they can perform this analysis in real time. In finance, algorithms such as accelerated Monte Carlo are used to estimate the variation of stock instruments over time.
Interpretation of Medical or Satellite Images, etc. with accelerated OpenCV algorithms.
Video processing in real-time: Used in all kinds of automotive applications, Retail (Analytics in Activity in Stores), Health, etc. using OpenCV accelerated algorithms and FFmpeg tools.
Real-time analysis of large volumes of data, for example, coming from IoT devices via Apache Spark.
Hybrid Data Centers, with different types of calculations for different types of tasks (CPU, GPU, FPGA, ASIC).
Cryptography, Compression, Audit of networks in real-time, etc.
Typically, accelerators are invoked from C/C++ APIs, but languages such as Java, Python, Scala, or R can also be used, as well as distributed frameworks such as Apache Spark ML and Apache Mahout.
Not all processes can be accelerated. As with GPUs, applying this technology to the wrong problem results in slower execution, due to the penalty for moving data between devices.
In addition, it must be taken into account that a server infrastructure with higher requirements is required for its development and use. Finally, to achieve a high-quality final product, highly specialized know-how is needed, which is scarce in today’s market.
In recent years, FPGA technology has approached the world of software development thanks to accelerator boards and cloud services. The reasons for using them range from achieving response times that cannot otherwise be met, or improving the user experience, to reducing energy and hosting costs.
Any software that involves complex algorithms and large amounts of data is a candidate for hardware acceleration. Some typical use cases include, but are not limited to, genomic research, financial model analysis, augmented reality and machine vision systems, big data processing, and computer network security.