Posts about JavaScript

Cloud migration paths - Which ones should you choose?

Posted by Urvashi Melwani on Sep 11, 2019 3:05:00 PM

As more infrastructure and applications shift to the cloud as part of digital transformation, one of the most critical decisions enterprises must make well ahead of time is which cloud migration approach best serves their long-term success.

According to a survey conducted by NetScout in 2018, a majority of enterprises (56% of respondents) had already started workload migration. Another 14% of respondents were in the planning stage, and a further 15% planned to carry out the migration within six months to a year.

[Chart: survey results on workload migration. Source: NetScout]

And as is apparent, there is no one-size-fits-all answer; up-front planning makes the migration process easier, and the whole cloud transition smoother.

So which is the best cloud migration approach for your business?

This blog takes a look at the three distinct migration approaches to help you choose the right one.

It’s time to reach the cloud

This report also predicts that 80% of companies feel the need to move their workloads to the cloud as soon as possible. Although there are multiple approaches to doing so, we will discuss the three most common here. Naturally, each has its benefits and disadvantages:

  1. Lift and shift aka Rehost
  2. Lift, Tinker, and shift aka Replatform
  3. Refactor

1. Lift and Shift or Rehost 

Rehosting, or the lift-and-shift approach, is a forklift approach that migrates applications to the cloud without any modifications to the code. It involves lifting part or all of an application from an on-premise or existing cloud environment to a new cloud environment.

It is currently considered the most common migration method, comprising 40% of all migrations, because of its agility, simplicity, and speed in comparison to replatforming and refactoring.

This is beneficial for large enterprises that want to migrate quickly with minimal or no disturbance to the existing application workflow.

And once the migration is done, it becomes much easier for them to optimize the applications, as the difficult part is already behind them.

When to choose this approach?

“This works best for organizations looking to reduce their on-premises infrastructure expenses immediately”

Here are some common instances when enterprises should choose the rehosting approach-

  • Large number of migrations over time
    The lift-and-shift approach should be opted for when it is simple, quick, and cheap and you have a lot of migrations to do over time. You also need to plan and budget for all of the post-migration work involved, for instance when you have lifted and shifted non-cloud tools, processes, and people into the cloud.
  • Urgency or pain point
    A common compelling event could be the urgent evacuation of a data center or hosting provider.
    This works best for organizations looking to reduce their on-premises infrastructure expenses immediately, those bearing too much cost in maintaining physical infrastructure, or those that have faced a cloud disaster (e.g. a corrupted database). Such organizations should opt for application rehosting to get their applications onto the cloud with minor or no modification, and to gain backups that keep them running smoothly and fast.
  • Commercial and off-the-shelf applications
    Rehosting is an apt choice for organizations that have applications on board that need to keep running without any intervention or modification. These are generally commercial, off-the-shelf applications, and a good strategy is to first move them onto the cloud as-is and then optimize.
  • Virtualization and IaaS skillset
    If the available resources are skilled in virtualization and infrastructure as a service, then rehosting matches their skill sets (whereas replatforming and refactoring need more skills).
  • Test environments
    Test environments are important for running apps successfully. Even when they aren't managed well, they can be moved easily with a lift-and-shift approach to avoid disruption.

Benefits of Rehosting

The benefits of the lift-and-shift approach are-

  • Quick migration
  • Reduced risk (simplicity)
  • Application, hypervisor, and hardware agnostic
  • Can be highly automated with limited or zero downtime
  • Imports configurations and scripts even when these are undocumented or hard to reverse engineer

Limitations of the Rehosting approach

“The rehosting method does not let you reap benefits from the native cloud functionality and tools like elasticity”

The rehosting approach works because it is the simplest form of migration. However, it carries risks and limitations:

  • Migrating brittle processes

When you migrate an application this way, you also inherit its operating system, generally undocumented configurations, and non-cloud people and processes. If these processes are not clearly understood pre-migration, the result is a fragile application and a brittle end product.

  • Cloud-native features

The rehosting method does not let you reap the benefits of native cloud functionality and tools, such as elasticity. The app functions the way it did on a single physical server but does not take advantage of the added flexibility and scalability offered by cloud environments.

  • Rehosted applications are black boxes

Simply copy-pasting applications and data without understanding what's in them means you are pulling everything into the cloud, including malware or insecure configurations.

  • Unbudgeted/planned post rehosting activities

There are always post-rehosting activities that need to be taken care of. These involve additional cost beyond the basic migration process, in terms of money, time, and resources. If avoided, these activities prove costly in the long run, with high expenditure incurred on over-provisioned resources.

  • Ingest known and unknown problems

If the application is facing problems outside the cloud, known or unknown, rehosting will likely bring those problems to the cloud. Retiring technical debt is a big plus of more advanced migration methods like replatforming and refactoring, or the drop-and-shop technique of repurchasing.

 2. Lift, Tinker, and Shift or Replatform approach

In a replatforming migration, part of the application or the entire application is optimized, with a small amount of API up-versioning, before moving to the cloud.

This varies from adding one or two functionalities to completely re-architecting applications before they can be rehosted or refactored and eventually deployed to the cloud.



“Developers can also reuse the resources they are accustomed to working with”

The replatforming approach is an interim solution between rehosting and refactoring, allowing workloads to take advantage of base cloud functionality and cost optimization without the level of resource commitment a full refactor requires.

Developers can also reuse the resources they are accustomed to working with, such as legacy programming languages, development frameworks, and existing caches in the application.

Replatforming can be used to add new features for better scaling and to leverage the reserved resources of your cloud environment. There are even ways to integrate the app with native cloud features with little or no code modification.

When to choose this approach?

Take a look at these scenarios in which to opt for this approach-

“Replatforming allows you to reshape them to make it compatible with the cloud”

  • Modification of applications is required
    Replatforming is suitable when organizations want to make changes to the API of an application (up-versioning) and then deploy it to the cloud. This may be because the source environment does not support the cloud, or because the organization wants some minor changes without hampering the application's functioning.
    In such cases, some fine-tuning is required, and replatforming is the optimum choice for that.
  • Avoid post-migration work
    Organizations that deployed the rehosting method realized that there is a slew of tasks that need to be done post-migration to realize the full potential of the cloud. The feasible solution is to simply make the changes to the application during the migration itself; replatforming works best in such a scenario.
  • Experience with more cloud skills
    If you have the resources available in your organization who have been working with cloud-based solutions lately and can now shape applications for cloud compatibility, or take shortcuts in the migration process, consider using the Replatforming approach.
  • Most apps are common three-tier web apps
    When most of your apps are three-tier web apps, replatforming allows you to reshape them for cloud compatibility. And once you have reshaped one, you can apply the same shaping far and wide, significantly improving migration efficiency as you move forward.

Benefits of Re-platforming

“Enterprises can leverage cloud-native functionalities without worrying about the risk, complexity, cost, and time of a full refactor”

Replatforming is a cost-efficient solution. It is an optimal middle ground between rehosting and refactoring, where enterprises can leverage cloud-native functionalities without worrying about the risk, complexity, cost, and time of a full refactor.

This approach does not require adjusting the cloud server to match the previous environment. Instead, you have the flexibility to start small and scale up as needed, which means you can save a lot while the cloud environment grows with the app itself.

 


Its benefits include-

  • Use of cloud-native functionalities
  • Apps can leverage base cloud cost optimization
  • Helps achieve tactical benefits, like reducing the amount of time spent managing database instances
  • Reduce/ replace the common application components with a better cloud service, such as replacing Nginx in a VM with AWS Elastic Load Balancer.

Limitations of Replatforming

“If the cloud service used to replace a component is inappropriate or poorly configured, then the re-platform migration can go wrong”. 

The major risk associated with replatforming is that, if unchecked, the project scope can grow during the process to become a complete refactor. Managing scope and avoiding unnecessary changes is key to mitigating this risk.

Secondly, if the cloud service used to replace a component is inappropriate or poorly configured, the replatform migration can go wrong. 

Other limitations include:

  • Overly aggressive change
    Every individual shaping during replatforming increases the risk of causing problems: be circumspect and choose common, well-known shapings. Avoid exotic changes unless there is a niche opportunity or they are unavoidable. The goal is a successful replatform, not an exotic one.
  • Automation is required
    Although the replatforming approach can be done manually, this has limitations, as modifications can be time-consuming. A better solution, therefore, is to model the application's needs using an automation platform and then make modifications to the model to represent the platform shapings.

 


 

A summary of the pros and cons of each approach:

Lift-and-Shift

  Pros:
  • Minimal work required to move the application
  • Faster migration and deployment

  Cons:
  • Typically does not take advantage of native cloud features
  • May cost more to operate in the cloud

Partial Refactor

  Pros:
  • Only parts of the application are modified
  • Faster migration and deployment than a complete refactor

  Cons:
  • Only takes advantage of some cloud features
  • May cost more to operate in the cloud

Complete Refactor

  Pros:
  • Applications typically offer higher performance
  • Applications can be optimized to operate at lower costs

  Cons:
  • Much higher cost since most of the application must change
  • Slower time to deployment

3. Re-architect or Refactor approach

Refactoring is the process where you run your applications on the infrastructure of your cloud provider, also referred to as Platform as a Service (PaaS).

Refactoring is a bit more complex than the other two: while making changes to the application's code, you must ensure that they do not impact the external behavior of the application. For example, if your existing application is resource-intensive, it may cause larger cloud bills because it involves big data processing or image rendering. In that case, redesigning the application for better resource utilization is required before moving to the cloud.




This approach is the most time-consuming and resource-demanding, yet it can offer the lowest monthly spend of the three approaches, along with the full potential of the cloud to increase performance, resilience, and responsiveness.

When to choose this approach?

Refactoring comes in handy for the enterprises in the following scenarios-

“Refactoring method helps in reducing cost and improvements in operations, resilience, responsiveness, and security”

  • Enterprises want to leverage cloud benefits
    Refactoring is the best choice when there is a strong business requirement to append features, scale, or enhance performance by deploying to the cloud, which is otherwise not possible in the existing non-cloud environment. Simply put, when the old ways no longer qualify, sticking to them can become an existential threat in this phase of cut-throat competition.
  • Scaling up or restructuring code
    When an organization is looking to expand its existing application, or wants to restructure its code to draw on the full potential of its cloud capabilities.
  • Boost agility
    If your organization aspires to amplify agility or improve business continuity by moving to a service-based architecture, then this strategy does the trick, despite the fact that it is often the most expensive solution in the short to medium term.
  • Efficiency is a priority
    The refactoring method helps reduce costs and brings improvements in operations, resilience, responsiveness, and security.

Further, you have the option to choose between a partial or complete refactor, depending upon your needs. A partial refactor involves modifying a small part of the application, which results in faster migration compared to a complete refactor.

Benefits of Refactoring

The benefits of refactoring are observed over time. The current application and its environment configuration determine the complexity of refactoring, and that impacts the time-to-value of the project.

Its benefits include:

“This approach ensures an over-time reduction in costs, matching resource consumption with the demand, and eliminating the waste”

  • Long-term cost reduction
    This approach ensures an over-time reduction in costs by matching resource consumption with demand and eliminating waste. Hence, it brings a better, more lasting ROI compared to less cloud-native applications.
  • Increase resilience
    By decoupling the application's elements and attaching highly available, managed services, the application inherits the resilience of the cloud.
  • Responsive to business events
    This approach lets the application leverage the auto-scaling features of cloud services, which scale up and down as per demand.

Limitations of Refactoring

Its limitations are-

  • Vendor lock-in
    The more cloud-native your application is, the more tightly it is coupled to the cloud you are in.
  • Skills
    Refactoring demands the highest level of application, automation, and cloud skills and experience to carry out the process.
  • Time
    As refactoring is the most complicated method of migrating from a non-cloud application to a cloud-native application, it can consume a considerable amount of time.
  • Getting it wrong
    Refactoring involves changing everything about the application, so it has the maximum probability of things going wrong. Each mistake causes delays, cost imbalances, and potential outages.

Refactoring is a complex process, but it is well worth the results and improvement that you get in return. It is a resource-demanding process, one that requires plenty of time to complete. Some companies go as far as refactoring parts of their business solutions to make the whole process more manageable, though this compartmentalization can also make refactoring longer and more resource-intensive.

Final words

Which one is the best approach?

There is no absolute answer to the question, especially since different use cases require different things. Picking one among the three approaches is a matter of finding the best fit for your specific needs. That said, start by checking whether the app can be moved to a cloud environment in its entirety while maintaining cost and keeping operational efficiency in check. If the answer is yes, start with the rehost method. If rehosting doesn't seem like a fit for you, or if cost-efficiency needs to be refined, you can also consider replatforming as a good option. Remember that not all apps can be transitioned this way, so you may end up having to find other solutions entirely.

The same approach goes for refactoring. If you have enough time and resources to complete a full refactor of your current solutions, then take SaaS and other alternate solutions into consideration. 

Nevertheless, you can certainly take most of the hassle out of moving to the cloud with the right cloud migration strategy. You can then devote yourself to finding new resources to use, better flexibility to benefit from, and a more effective environment for your apps. 

Take these points into account, and you'll be able to find the best approach among these. However, there is no defined path to success. Your organization's needs may vary and may lead you to adopt a combination of these approaches, i.e. a hybrid approach.

For example, it is possible that after conducting a migration analysis for your organization, it is determined that:

  • 50% of your apps need to be re-hosted
  • 10% to be retained on-premises in a colocation facility
  • 40% apps, which are maintenance-level yet business-critical, are flagged for re-platforming/refactoring

What is important in the end is to plan and roll out your migration plan by conducting a thorough analysis of your complete IT system, your infrastructure, and your application workload. 

This assessment will help you determine which strategy to use and which part(s) should be moved to the cloud. 

Topics: AWS, Cloud, Javascript

Choosing the right JavaScript framework in 2017

Posted by Sanjay Rohila on Apr 11, 2017 6:02:00 PM

It’s become quite a task to keep track of all the JavaScript frameworks and libraries available today. Each of these exists because there’s definitely some need for it. But the question is, which one should you pick for your enterprise project?

The way to go about it is to evaluate your business requirements against the pros and cons of each framework. And though there are a lot of frameworks out there, only a handful are capable of handling enterprise projects. Most JS frameworks are open source and can create single-page applications with a Model-View-Controller (MVC) architecture. Yet each of them performs differently based on the types of applications they are used in.

The Best JavaScript Frameworks

Let’s take a look at the six best JavaScript frameworks and libraries, three of the most popular ones in the past few years and three recent entrants that are fast catching up. 


  • ReactJS
  • AngularJS
  • Node.js
  • Vue.js
  • Preact JS
  • Next JS

ReactJS

Released in 2013 as a component to quickly update dynamic content on Facebook, React is now maintained by the open source community. It maps to the view layer of an MVC architecture and focuses on the efficient rendering of large data sets.

React increases performance with a faster method of rendering content to the screen. It is particularly useful if you are dealing with an application that uses large data operations and dynamic content. It creates a diff of your virtual DOM against the active DOM to identify the elements that need to change. It has an easy interface design and an easy-to-learn API, and it increases performance significantly compared to other JavaScript frameworks.
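The diffing idea can be sketched in plain JavaScript. This is a simplified illustration of the concept only, not React's actual reconciliation algorithm; the node shape (`{ tag, text, children }`) and the `diff` function are invented for the example:

```javascript
// Simplified virtual-DOM diff: compare two element trees and collect
// the minimal list of patches needed to update the real DOM.
function diff(oldNode, newNode, path = 'root') {
  const patches = [];
  if (!oldNode) {
    patches.push({ type: 'CREATE', path, node: newNode });
  } else if (!newNode) {
    patches.push({ type: 'REMOVE', path });
  } else if (oldNode.tag !== newNode.tag) {
    patches.push({ type: 'REPLACE', path, node: newNode });
  } else if (oldNode.text !== newNode.text) {
    patches.push({ type: 'TEXT', path, text: newNode.text });
  } else {
    // Same element: recurse into children only.
    const len = Math.max((oldNode.children || []).length,
                         (newNode.children || []).length);
    for (let i = 0; i < len; i++) {
      patches.push(...diff((oldNode.children || [])[i],
                           (newNode.children || [])[i],
                           `${path}.${i}`));
    }
  }
  return patches;
}

const prev = { tag: 'ul', children: [{ tag: 'li', text: 'a' }, { tag: 'li', text: 'b' }] };
const next = { tag: 'ul', children: [{ tag: 'li', text: 'a' }, { tag: 'li', text: 'c' }] };

// Only the changed <li> produces a patch; the rest of the tree is untouched.
console.log(diff(prev, next)); // → [ { type: 'TEXT', path: 'root.1', text: 'c' } ]
```

The payoff is the last line: updating one list item yields a single patch instead of a re-render of the whole list.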

Developers often use React to augment Angular, to enhance the performance of troublesome components.

On the plus side, React gives you:

Faster updates: Since it creates a minimal list of updates to apply to the real DOM to bring it in sync, rather than re-rendering the entire site, it updates faster. Additionally, server-side rendering allows creating isomorphic, universal web apps.

Fewer dependencies and good code reuse: It enables easy import of components, has very few dependencies, and fantastic code reuse. It’s also efficient for JavaScript debugging.

Downside?

It’s important to remember that React is not a full framework but a loaded library with a very sophisticated view layer. 


Angular JS

If React is a brand new engine, Angular is a whole new car complete with an engine, wheels, and a sleek body. Released in 2009, it is an open-source JavaScript web application framework built and maintained by Google. It has over 40,000 stars on GitHub.

To start with, Angular reduces download sizes by updating only the portion of the content that needs modification. Its main target is to resolve the difficulties involved in creating single-page applications (SPAs) that may load new content without leaving the page, such as in social media sites. 

Note how the Facebook feed updates as you scroll down the page. To achieve this, Angular provides a client-side framework in an MVC architecture to enable the creation of high-quality brand experiences within an app featuring dynamic content.

On the plus side, Angular: 

Creates two-way data binding: This refers to event listeners that attach to data changes in an application. It reduces the amount of code developers need to build sleek interactions and smooth content transitions.
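Conceptually, two-way binding can be sketched in a few lines of plain JavaScript. This is an illustration of the idea only, not Angular's actual implementation (which involves scopes and a digest cycle); `bind`, `onInput`, and the `model` object are invented for the example:

```javascript
// Keep a model property and a "view" in sync through accessors:
// writes to the model notify the view, and view events write back.
function bind(model, key, onModelChange) {
  let value = model[key];
  Object.defineProperty(model, key, {
    get() { return value; },
    set(next) {
      value = next;
      onModelChange(next); // model → view update
    },
  });
}

const model = { name: 'Ada' };
let viewText = '';
bind(model, 'name', next => { viewText = `Hello, ${next}`; });

model.name = 'Grace';    // a data change in the app...
console.log(viewText);   // → "Hello, Grace" (view updated automatically)

// ...and a view event (e.g. an input's "input" event) writes back:
function onInput(event) { model.name = event.target.value; }
onInput({ target: { value: 'Lin' } });
console.log(viewText);   // → "Hello, Lin"
```

The point is that neither direction required hand-written synchronization code at the call sites, which is the code reduction the paragraph above describes.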

Offers many functionalities: Despite its large framework, Angular offers numerous innovative functionalities that speed up development. Also, the growing Angular community has helped reduce the cost of building SPAs. 

Downside?

  • The problem with Angular, as with any other front-end framework, is delayed execution while rendering, updating, and manipulating huge chunks of data. 

  • Angular enables fast improvement of your app’s quality, but it also requires proficient developers for building the complex UI components. 

  • Angular has a steep learning curve. If you do not have developers well versed in TypeScript, this could be a treacherous terrain.

Node.js

Node.js is an open source, cross-platform runtime environment built on Chrome's V8 JavaScript engine. It provides fast and scalable execution for server-side and networking applications. For real-time web apps, Node.js is a strong choice, as it enables creating APIs that handle I/O-bound requests and demand scalability.

On the plus side:

It's superfast: Node.js runs on the V8 engine developed by Google. It uses a single thread and functions at super speed. It also enables developers to create quick, robust network applications. Such benefits reduce the number of servers that enterprises need to use and also make applications up to 20 times faster.

Additionally, with Node.js, it is possible to quickly create real-time web applications.

It visualizes requests as data streams: Node.js treats HTTP requests and responses as data streams. This means you can process files while they are still uploading, reducing the overall processing time. The same applies to real-time video and audio recording.

Additionally, managing asynchronous I/O from the root is efficient with Node.js, as is the resolution of common web and network development problems. 

It's flexible and migrates easily: Node.js has JavaScript on both the browser and the server side. This makes it more flexible and enables easy migration of data by allowing developers to write JavaScript for both the server and client. It's therefore convenient to coordinate data between server and client smoothly, and changes made on the server appear instantly on the client.

It's backed by a strong community: Being open source, the shared repository of dynamic tools and modules gives Node.js a further edge. Over 60,000 modules in the Node Package Manager (npm) have put it in competition with the Ruby on Rails platform, with added speed and solid dependency management.

Node.js programs get rid of inter-silo dependencies and allow enterprises to combine frontend and backend teams for smooth functioning.

Downside?

  • Node.js requires a great deal of developer expertise, and if you do not have an experienced JS team at hand, things might be difficult. 

  • Debugging with Node.js is difficult, despite tools like node-inspector. So once again, an experienced JS team is mandatory.

  • This is compounded by the fact that module documentation for Node.js is not as advanced as the other frameworks. This means help is not as readily available. 

Vue.js

If you have backend data collected over a period of time and want it to show up without investing a lot of time and effort in a front-end framework, Vue.js is your best bet. It's a front-end framework that uses simple objects as models, but one that packs a lot of power.

On the plus side, Vue.js has:

Very short pick-up time: Vue.js is progressively adoptable, and its author aims to make the development of web UIs more approachable. This means any developer who chooses to work with Vue.js can learn it pretty fast, almost in minutes, and create something productive and functional using modern, interactive web interfaces. Developers agree that this is a seriously underrated benefit.

Supports complex web architecture: One can also use its drop-in library to add simple interactivity in existing pages and even totally replace jQuery. 

Vue.js can also be used to build large single-page applications. It accommodates modifications and allows the addition of more complicated functionality to simple, easy-to-debug user interfaces.

Allows a lot of freedom: Vue.js is lightweight, easy, and well structured; it has an excellent ecosystem around it, and allows developers a lot of freedom to adopt, compared to several other frameworks. Also, it has very few bugs and errors, which are tackled pretty fast. 

Downside?

No standardization: The only trouble with Vue.js is its lack of standardization and documentation. While developers agree that this is what makes it so adoptable and fast to learn, it can be a deterrent when a project runs for a longer time and developers change through the lifecycle. Having a standard way to do things has its own advantage and it also leads to a more consistent code base.

Preact

Developers call Preact a first-class citizen of the web platform. It can be made compatible with React, although it is not intended as a reimplementation of it. Rather, Preact aims to remain small, lightweight, and focused compared to the complex and intricate React codebase.

Preact was developed with the mindset that the web is a stable platform and we need not be paranoid about safety. It diffs the Virtual DOM against the DOM itself and registers real event handlers.

On the plus side, Preact is:

Small in size: Unlike most UI frameworks, Preact is so small that your code becomes the largest part of your application. This means there's less JavaScript to download, parse, and execute, leaving more time for your code. It plays well with other libraries too.

Fast execution: Preact is one of the fastest Virtual DOM libraries because of its small size and its simple, predictable diff implementation. It includes extra performance features like customizable update batching, optional async rendering, DOM recycling, and optimized event handling via Linked State. It works well for creating small, functional POCs at enterprises without spending much time or resources.

Portable: Preact is portable and can be embedded. One can use it to build parts of an app without complex integration, simply by embedding Preact into a widget and applying the same tools and techniques used for building a full app.
Also, like Vue.js, Preact's light weight helps one get immediately productive with it.

Downside?

Being a new kid on the block, the community backing and consequently the documentation, is minimal. Hence the amount of help available online is limited.

Next JS

The makers believe that universal, isomorphic applications are a big part of the future of the web. That's why they came up with Next.js, an advanced, open-sourced, universal JavaScript framework. It supports server-rendered web apps that run on React.

Next.js includes an advanced feature, the component lifecycle hook getInitialProps(), which enables the framework to perform initial rendering on the server and continue rendering on the client as required.
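A minimal sketch of the getInitialProps contract is below. PostPage and fetchPost are invented for the example, and a real Next.js page would live in pages/ and return JSX rather than a string; this sketch only shows how the framework resolves data before rendering:

```javascript
// fetchPost stands in for a real data source (API call, database, ...).
async function fetchPost(id) {
  return { id, title: `Post ${id}` };
}

// A plain function stands in for a React page component.
function PostPage({ post }) {
  return `<h1>${post.title}</h1>`; // a real page would return JSX
}

// Next.js calls getInitialProps on the server for the first render
// and on the client for subsequent navigations, then passes the
// resolved object to the component as props.
PostPage.getInitialProps = async ({ query }) => {
  const post = await fetchPost(query.id);
  return { post };
};

// Simulating what the framework does for a request like /post?id=7:
PostPage.getInitialProps({ query: { id: 7 } })
  .then(props => console.log(PostPage(props))); // → "<h1>Post 7</h1>"
```

Because the same hook runs on both server and client, the data-fetching code is written once, which is the "universal" property the paragraph above describes.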

On the plus side:

High-speed execution: Next.js enables users to spin up a new web app with ease, without requiring any major tool setup. Like React, Next.js sets up a build process upon installation of the framework. It doesn't need any Webpack or Babel configuration either. These benefits speed up a developer's work to a great extent.

Advanced technique: Universal JavaScript is an advanced technique and several frameworks have their own advanced implementations. For example, Angular 2 has Universal Angular 2 and Ember has Fastboot. Next.js combines the advanced features in its simple framework with huge potential. Ahead-of-time bundling and compilation is a critical part of an effective deployment. 

Downside?

Once again, being the new kid on the block, documentation and online support are limited.

So that’s our assessment of the JavaScript frameworks that we think are going to be very popular in 2017. 

Srijan’s JS teams are experienced and well-equipped to deliver enterprise projects. In case you have chosen the JavaScript framework that’s best for your project, but do not have a team experienced enough to implement it, we can help. Let’s get started.

Topics: Framework and Libraries, Javascript

Recent JavaScript projects at Srijan

Posted by Nilanjana on Feb 7, 2016 4:53:00 PM

Srijan has been helping clients build new features for websites using JavaScript. Here are some of the projects where Srijan has used JavaScript successfully.

Dynamic page block and information cards for a custom writing company

Srijan created a dashboard and writer search page for the client.

We created dynamic blocks using APIs in Angular, with accordion functionality. The Search/Filter feature can filter content on the basis of keywords. 

A writer search page has also been created where writers can be searched on the basis of different parameters and keywords.


Tech tools - Angular.js

Quoting Application Wizard for Auto Insurance Company

Srijan built an interactive wizard which takes the user's information and fetches related car and driver information using the LexisNexis API.

We generated HTML pages on the fly by consuming JSON that carried all the information required to generate the forms. These HTML forms encode complex business logic related to drivers and vehicles, enabling the end user to add or modify information as well as assign drivers to specific vehicles. The quiz pages were also generated on the fly by consuming the Drupal API, which returns a set of questions based on the user/driver profile.
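As a hedged sketch of that form-generation step (the JSON field names and shape below are assumptions for illustration, not the client's actual schema), building an HTML form from a JSON description might look like:

```javascript
// Hypothetical sketch: render an HTML form from a JSON field list.
// Field names and attributes are illustrative only.
function renderForm(fields) {
  const inputs = fields.map(f => {
    const required = f.required ? ' required' : '';
    return `<label>${f.label}<input type="${f.type}" name="${f.name}"${required}></label>`;
  });
  return `<form>${inputs.join('\n')}</form>`;
}

const html = renderForm([
  { name: 'driver', label: 'Driver name', type: 'text', required: true },
  { name: 'vin', label: 'Vehicle VIN', type: 'text', required: false }
]);
console.log(html);
```

The same approach scales to the business-logic case described above: the JSON simply carries extra flags per field, and the renderer branches on them.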

Finally, we rendered the page with quotes displaying the match scores. The end user gets the option to print and change the preferences related to quotes.

Tech Tools Used: JavaScript (Promises), jQuery, Octopus Pattern, Local Storage, Web Services

Information Aggregation for a search platform that brings local search and deals in one place

Srijan helped the client with aggregating product data from multiple e-commerce sources.

We built a Node.js application that crawls source websites in a fixed, pre-defined manner, using Tor as a proxy. Each product page was then processed with Cheerio to extract information, which we converted into JSON and saved to MongoDB.
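Cheerio exposes a jQuery-like API over server-side HTML. As a minimal, dependency-free stand-in for the extraction step (plain regexes instead of Cheerio selectors; the field names and markup are illustrative), the page-to-JSON conversion looks like:

```javascript
// Hedged stand-in for the Cheerio extraction step: pull product
// fields out of an HTML page and emit JSON. Selectors are illustrative.
function extractProduct(html) {
  const pick = (re) => {
    const m = html.match(re);
    return m ? m[1].trim() : null;
  };
  return {
    title: pick(/<h1[^>]*>([^<]+)<\/h1>/i),
    price: pick(/class="price"[^>]*>([^<]+)</i)
  };
}

const page = '<h1>Blue Kettle</h1><span class="price">Rs 499</span>';
console.log(JSON.stringify(extractProduct(page)));
// → {"title":"Blue Kettle","price":"Rs 499"}
```

In the real project each extracted object would then be inserted into MongoDB rather than printed.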

Tech tools Used: Node.js, MongoDB, Cheerio, BlueBird (Promises), async, Tor.

Topics: Javascript

Solving a Mobile Design Challenge at the Designathon 2013

Posted by Nilanjana on Oct 25, 2013 4:02:00 PM

When we heard of the Designathon 2013, a first of its kind design event in New Delhi, many of us at Srijan were keen to participate. As the seats filled up quickly, I found myself without an entry to the event. But as luck would have it, a colleague opted out at the last minute and I got the chance to participate. Designathon is a design competition organized by Noida-based TLABS (a Times Internet company) with the theme of ‘Solving web and mobile design problems’. It was a 24-hour design challenge, in which we had to solve a web and mobile design challenge presented to us.

The event began with an address by Satyan Gajwani of Times Internet, followed by keynotes by Vijay Shekhar Sharma of One97 and Nishant Jain of Design for Use. They told us about the things to keep in mind while designing a solution. I learnt that we must always design for the experience and not the product.

We were then given four problem statements. Over a hundred participants listened carefully as the four presenters spoke about their design problems in detail. After the dinner break, the teams chose the problems they wanted to work on.

Our team—comprising Ravish, Girish, Rajneesh and myself—represented Srijan Technologies. Srijan had two more teams at the event. We chose "Magic Pathshala", which was a problem presented by Magic Software. In this problem, we had to design a solution for rural children who were enrolled in primary schools but were not really able to learn basic arithmetic and English. For this, we had to design an offline mobile app for an entry level smartphone with pre-assessment and post-assessment modules, and a learning module for the quantification of a collection of objects. We had to use storification to achieve the learning objectives of the child.

We had an initial discussion and returned home, coming back the next day with our thoughts, mind sketches, and ideas for approaching the problem. We quickly got to work the next morning: we heard out each other's ideas on the story around which the learner's problems/puzzles had to be aligned. We zeroed in on a story about reuniting a lost baby elephant with its family.

Rajneesh noted down all the constraints we had to keep in mind while designing, the persona the mobile app was targeted at, and the logistical details of how the app would be downloaded, run, upgraded, and so on. Girish started creating mockups of the app screens, while Ravish started sketching the wireframes. I started working on the mockups too, and then realised that we hadn't made much progress on the specific problem statement at hand. So we went back to thinking and sketching the multiple learning puzzles the learner would solve to help the story character achieve the desired goal.

As and when a game was finalised, Ravish would sketch a wireframe for the possible app view. All of this kept changing until we realized that only two hours were left to submit the solution. We refocused, read the problem statement over and over, and tried fitting the already created mockups into the final presentation we would show the jury. A few design mentors visited our desk, but they didn't look too convinced by the state we were in.

Other teams had more finished, presentation-ready screen mockups. The mentors suggested that we create a few mockups for the app (at least for the home page). Then we received a mail from the organizers saying there would be a screening round in which they would review each team's designs before submission. That got us worried: would we get through to the final round, when most of our screens were just sketches and wireframes? Ravish then started making a mockup of the home screen for the mobile app, and I tried to create a mockup for the website homepage, which was also expected from the teams.

I shared my rough (read shabby) draft with Ravish, who refined and reworked it to make it look really good. Girish had created a few mockups for the login, registration and lessons menu for the app by then. Rajneesh quickly put these into our final presentation and in no time we were ready for the screening.

Our screening round went fine and after a few more tweaks, we submitted our presentation. As the mail went out, all of us wore big smiles on our faces as we had managed to complete the designs.

Who would present the solution to the jury? I volunteered. Soon the presentations started. After an hour or so, the organizers allowed a short break. Our turn was right after the break. As everyone went to grab some coffee, I ran to the restroom to do a round of rehearsal. And I was ready for my turn.

I got just three minutes to speak, and I was only halfway through my presentation when I realized that only 30 seconds were left! I quickly showed the remaining slides and explained the most important points. Rajneesh pitched in to add some points as well. We managed to get some extra time courtesy of the person who had presented the problem statement. He was not part of the jury, and he negotiated extra time with the organizers (he seemed to have really liked our solution!).

After the presentation, we were happy and everything seemed complete. I had to rush home right after the presentation but I kept checking my phone for updates from my team at the venue. Finally, I couldn’t handle the suspense anymore, and called up my team mates. And I found out that right at that minute, they were walking up to collect the runners-up prize of Rs 50,000! And I jumped with joy!

Topics: Community, Javascript

Introduction to Responsive Designs

Posted by Nilanjana on May 17, 2013 4:38:00 PM

Adoption of smartphones and tablets is increasing rapidly, which in turn is increasing the importance of mobile-friendly websites.


Studies show that people now expect pages to load in two seconds or less, and slow load times are a leading reason people abandon e-commerce websites. People expect things to load even faster on mobiles.
The web has always acted as a stabilizing agent for competing priorities like SEO, marketing, performance, and now device diversity. To manage this diversity, many people are moving to responsive web design solutions.


A website is responsive when its layout and/or content responds to the size of the screen it's presented on. A responsive website automatically adjusts itself to fit the device you're reading it on. Responsive design generally targets four screen sizes: the widescreen desktop monitor, the laptop, the tablet, and the mobile phone.

Elements of responsive web design:


A site designed with RWD uses CSS3 media queries to adapt the layout to the viewing environment, along with fluid proportion-based grids and flexible images:

  • Media queries permit the page to use different CSS style rules based on traits of the device the site is displayed on, most commonly the width of the browser.
  • The fluid grid concept sizes page elements in relative units like percentages or ems, instead of absolute units like pixels or points.
  • Flexible images are sized in relative units so that they do not display outside their containing element.
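The fluid grid conversion above follows the classic target ÷ context = result formula: divide an element's fixed pixel width by the width of its container, and use the resulting percentage. A tiny sketch of that arithmetic (the example widths are illustrative):

```javascript
// Fluid grid sketch: convert a fixed pixel width into a relative
// percentage using the target / context formula.
function toPercent(targetPx, contextPx) {
  return (targetPx / contextPx) * 100;
}

// A 300px sidebar inside a 960px layout becomes a 31.25% column,
// so it keeps its proportions as the viewport shrinks or grows.
console.log(toPercent(300, 960) + '%'); // → 31.25%
```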


Almost everyone is talking about responsive design these days, and usually with a strong opinion.

Topics: Framework and Libraries, Javascript

TYPO3 support and maintenance services

Posted by Nilanjana on Nov 17, 2010 2:48:00 PM

What do you do if your TYPO3 website has problems, security patches are released, or you need new features developed on the site? Srijan, with its well-defined support team, is geared to handle such production and feature-upgrade requests. Srijan also has an infrastructure team competent in performance tuning and in scaling high-performance sites.

Srijan maintains large TYPO3 websites such as www.clubmahindra.travel, www.greatbooks.org, and www.niitimperia.com under monthly retainer contracts.

Here are some of the support contracts we engage with customers in:

Monthly Support Retainer

You should go for this contract when you have a high-traffic website that is critical for your business, and you need a team to respond with quick turn-arounds, under a defined SLA, in case of any emergency. Under this contract Srijan offers:

  1. A 20 hour to 40 hour monthly support contract
  2. An SLA-determined turnaround time for Critical, Major, and Minor issues in the website
  3. Access to Srijan's issue tracking system on JIRA - an industry standard for this purpose
  4. A team of TYPO3 development and theming experts, and sysadmins ready to respond to any emergency or support request
  5. Feature enhancements within the maintenance hours defined

Please write to us for more information, the SLA, and a rate offer.

Yearly Support Retainer

Go for this model when you have a website that is low-traffic but you still require ad-hoc support through the year. Under this model Srijan offers:

  1. 50 hour or 100 hour blocks which can be used anytime through the year
  2. Since it is difficult to anticipate requests and keep hours blocked in this model, the SLA is quite relaxed
  3. Access to Srijan's issue tracking system on JIRA - an industry standard for this purpose
  4. A team of TYPO3 development and theming experts, and sysadmins ready to respond to any emergency or support request
  5. Feature enhancements within the maintenance hours defined

Please write to us for more information, the SLA, and a rate offer.

Topics: Javascript

TYPO3 performance tuning and testing

Posted by Nilanjana on Nov 4, 2010 12:00:00 AM

As your website grows, you might find it not as responsive as it once was.

Srijan offers a TYPO3 Performance package that'll take your slow-running TYPO3 site and put some much-needed zip, vroom, and bang back into site delivery. I've put together this TYPO3 performance checklist of areas to investigate and utilities to use, based on 8 years of TYPO3 development experience.

Looking into the areas mentioned in the checklist below will help your site overcome system misconfiguration, badly written extensions, bad database indexing, and high server loads.

Past tunings have seen:

  • 30- to 75-second load times for non-cached pages reduced to under 7 seconds
  • 10- to 15-second load times for cached pages reduced to 2 to 4 seconds
  • Server loads above 6 reduced to 1 or less
  • Media identified that could be reduced in file size
  • Situations identified where multiple servers could be combined

TYPO3 Performance Checklist

  • Benchmarking
    • Before, during, and after performance benchmarking
    • After-action summary
    • Detailed spreadsheet
  • TYPO3 Enhancements
    • Content compression
    • Logging
    • Extensions
    • File linking
    • Page caching
    • System updates
  • MySQL Database Server Enhancements
    • Connection and timeout settings
    • Key buffer settings
    • Caching
      • Table
      • Query
      • Thread
    • Error and slow query logs
    • Concurrent inserts
    • InnoDB table-type usage
  • Apache Web Server Enhancements
    • Logging
    • eAccelerator or Zend
    • Module loading
    • .htaccess
    • Multi-Processing Module
    • PHP resources
    • Services

If you find yourself stuck at some point, Srijan can readily help. Please email business@srijan.in to inquire about a TYPO3 Performance package that is hard to beat.

Topics: Javascript

"Multiple File Upload" using a Java Applet

Posted by Nilanjana on Dec 9, 2009 4:40:00 PM

A 'corrupted file upload check' has been implemented using the MD5 checksum method. In this approach, the MD5 hash of each file being uploaded is calculated once on the client machine, and again on the server after the file is uploaded. Matching the two MD5 hashes completes the verification.

The file upload and the client-side MD5 hash calculation are done on the client machine by a Java applet. A PHP script calculates the MD5 hash on the server and compares the two hashes. A match confirms that the file was uploaded without any corruption of its data.

The Java applet sends the file and the MD5 hash in the format defined by the HTTP protocol, so any scripting language can read the file data, which also gives smooth integration with TYPO3.

Topics: Framework and Libraries, Javascript
