Simplifying Object Storage Management with Data Analytics 


Nutanix Objects is an on-prem object store service. To consume its storage and compute resources, users need a cluster (a group of hosts) on which the object store can be deployed. I worked on several features, including resource scale-out and analytics.




Role / 

Sole UX/UI Designer


Collaboration / 

PM and Engineers


Year / 

Sept – Dec 2019


What is Objects?

Object storage is mainly used by developers and storage admins who need to store infrequently accessed data at low cost. Object storage manages data as objects, and objects are organized into containers called buckets.
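The object/bucket model described above can be sketched with a minimal in-memory example. This is an illustration of the data model only, not Nutanix's API; the class and method names are hypothetical.

```python
# Minimal sketch of the object-storage model: data is stored as objects,
# and objects are grouped into containers called buckets.
class ObjectStore:
    def __init__(self):
        self.buckets = {}  # bucket name -> {object key -> bytes}

    def create_bucket(self, name):
        self.buckets.setdefault(name, {})

    def put_object(self, bucket, key, data):
        self.buckets[bucket][key] = data

    def get_object(self, bucket, key):
        return self.buckets[bucket][key]


store = ObjectStore()
store.create_bucket("backups")
store.put_object("backups", "2019/report.csv", b"col1,col2")
print(store.get_object("backups", "2019/report.csv"))  # b'col1,col2'
```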

Project I

Resource Scale-out

Deploying an object store on-prem requires users to estimate how much compute and storage they are going to consume. Based on the user's estimate, the system provides a list of clusters that meet the requirement. This workflow is complicated for new users. I simplified the process by removing unnecessary configurations and changing interactions.


[Screenshot: original scale-out workflow]
Text instructions were scattered throughout the screen to guide users on how to enter an estimate. However, this imposed significant cognitive load on users and was error prone.


[Screenshot: redesigned scale-out workflow]
I simplified the workflow and layout in two ways: 
- Aggregated all text-based instructions in one place so users can read everything at once.
- Combined the vCPU and Memory text inputs into one input stepper to prevent mistakes and reduce cognitive load.
See interaction above
Project II

Analytics- Storage Usage

Objects 1.0 supports deploying an object store on a single cluster. In 2.0, Nutanix supports deploying an object store across multiple clusters. One major part of the analytics design is storage usage. The team wanted to visualize total storage usage across multiple clusters and help storage admins better manage their storage. However, a few engineering constraints posed a challenge for the design. Below is a table of the figures and stats available from the backend. Note that users cannot set a limit for object store usage on the primary cluster, but can set limits on secondary clusters.

[Table: figures and stats available from the backend]

After listing all the figures and understanding the relationships between them, I took a look at the current data visualization for storage usage and found several issues.


I decided to show the total usage of multiple clusters, but a few stakeholders on the team argued for showing the available object store storage as the total. To convince them, I made quick mockups comparing my design (design 1) with their proposal (design 2) to show why surfacing all the data adds clarity and helps users decide which action to take.

[Mockup] Iteration 1
[Mockup] Iteration 2


Since data in an object store is spread across multiple clusters, users can't choose to store data on only one cluster, even if a specific cluster is reaching its limit. In this case, the final design shows total usage first to reduce cognitive load.

Warning on Usage Limit

When other workloads take up so much space that they encroach on the capacity intended for the object store, the system provides a warning informing the user of the "infringed value". To decide what actions to take, the user can click on Details to see which cluster's object store capacity is being taken up by other workloads.

[Animation: warning on usage limit]

Alert on Usage Limit

When overall usage approaches the pre-configured hard limit, the system raises an alert. The user can click on View Details to investigate which cluster is reaching its limit. By looking at each usage bar, the user can decide how to move the needle: either increase the limit if other workloads are not taking up much storage, or add nodes to expand the overall cluster storage.
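The warning/alert rule described above can be sketched as a simple threshold check. The 90% warning threshold and the function name are illustrative assumptions, not actual Nutanix values or APIs.

```python
# Hypothetical sketch of the usage-alert rule. The 90% warning threshold
# is an assumption for illustration, not a real Nutanix setting.
def usage_alert(used_gib, hard_limit_gib, warn_ratio=0.9):
    """Return an alert level based on how close usage is to the hard limit."""
    ratio = used_gib / hard_limit_gib
    if ratio >= 1.0:
        return "alert"    # hard limit reached: increase the limit or add nodes
    if ratio >= warn_ratio:
        return "warning"  # approaching the limit: investigate per-cluster usage
    return "ok"


print(usage_alert(50, 100))   # ok
print(usage_alert(95, 100))   # warning
print(usage_alert(120, 100))  # alert
```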

[Screenshots: storage consumption alerts]

