
Stay informed everywhere with Offline Folders


Read this article in Chinese.

It's time for yet another major release for our Android and iOS apps. This time we are proud to bring you one of the most requested features to date - Offline Folders.

We have considered a full offline mode for a long time, but synchronizing your whole account could lead to a lot of problems if you have too many subscriptions, such as high CPU, battery and storage usage. You could also miss certain articles if your phone doesn't have enough storage to keep them all. That's why we decided to implement offline mode at the folder level. This way you control which folders will be synchronized and which ones won't.

To set up offline mode, just swipe a folder to the right (on iOS) or long-press on it (on Android) and you will see the following screen:



You can set the folder to download articles only on WiFi and to download only unread articles. We recommend keeping those two options turned on unless you have an unlimited data plan. Next up are the two more interesting options - Download images and Download full content. The first is turned on by default and downloads all images from the offline articles to your device, including GIFs. The second uses our mobilizer to automatically fetch the full content of each article. These two options can dramatically improve your offline experience, but they can also slow down the synchronization process.

When you save the settings, your folder icon will turn green, indicating that it is now offline. It will continuously synchronize in the background. On Android you can control the interval from Settings -> Offline; on iOS the operating system controls it. With Offline Folders, articles are immediately available to you whether you are in the subway, on a plane or hiking in the mountains with no mobile coverage.

Offline Folders are available to Professional users only. You can take advantage of our current Black Friday promotion until Nov 25 to get 18 months for the price of 12.

This is the first iteration of our Offline Folders feature and we are committed to improving it as much as possible, so please share your feedback with us.

Update to the latest version from your app store:


               


--
The Inoreader team












Read the web in your language with Article Translations


Read this article in Chinese.

Great content can be found all around the web. However, it's not always in your native language, and depending on where you are, this can be a major barrier. Not anymore: Inoreader now allows you to translate articles inline, without the need to leave the application! We automatically detect the article language, and if it differs from the language of your user interface, we offer you an option to translate it.

Here you can see the translation function in action on our iOS app. It's essentially the same on Android and web.

           


Translations can also work together with our full content function, so you can basically translate everything you see on the screen with a single click or tap. 

Article translation is available in our Professional plan on the following platforms:

--
The Inoreader team

Filters are now available in our IFTTT channel


If you are using IFTTT with Inoreader, you will be happy to learn that we have recently launched an updated channel with one small yet very powerful change: Filters.



Filtered applets will only run when an article's title or content matches the keyword or phrase. And of course, since Inoreader is built for power users, we also allow you to use the power of regular expressions in the Filter field. Just wrap the regular expression in forward slashes, like /Chocolate cookies?/, which will match both "Chocolate cookies" and "Chocolate cookie".
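To make the matching rule concrete, here is a minimal Python sketch of the logic as described above - an illustration only, not IFTTT's or Inoreader's actual implementation:

import re

def matches_filter(text, filter_expr):
    # Expressions wrapped in forward slashes are treated as regular
    # expressions; anything else is a plain keyword/phrase match.
    if len(filter_expr) > 2 and filter_expr.startswith("/") and filter_expr.endswith("/"):
        return re.search(filter_expr[1:-1], text, re.IGNORECASE) is not None
    return filter_expr.lower() in text.lower()

print(matches_filter("Chocolate cookie recipe", "/Chocolate cookies?/"))    # True
print(matches_filter("Chocolate cookies on sale", "/Chocolate cookies?/"))  # True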

Already thinking of some nice applets to go with our new filters? Share them with our community in the comments section.

IFTTT Integration is available in our Professional plan.

--
The Inoreader team

Get news in your language with our new localized catalog



Many people rely on Inoreader to get news and updates primarily in English. Local news is often just as important, but usually harder to find. That's why in the past few months we have worked with partners in select countries to build a localized catalog with the best sources of local news. Here is the list of countries:
  • France (Nouvelles de France)
  • Brazil (Notícias do Brasil)
  • China (国内新闻)
  • Russia (Новости России)
  • Germany (Nachrichten aus Deutschland)
  • Spain (Noticias de España)
  • Ukraine (Новини України)
  • Japan (日本からのニュース)
  • Poland (Wiadomości z Polski)
  • Bulgaria (Новини от България)
  • Italy (Notizie da Italia)
Additionally, we have local news in English from the USA, the United Kingdom and Canada.

To access the new catalog, just open our Discovery section. On the web, it's enough to click on the search bar. On mobile, just tap the + sign on the main screen. Depending on your country, you will see the matching category first.


                        

The new catalog is accessible on all devices, but for faster access we urge you to install the latest versions of our Android and iOS apps. 


We are doing our best to bring you the most refined and balanced news in your language and we have recently partnered with Deutsche Welle (DW) to deliver unbiased and reliable news for your country. Local DW news feeds are available for the following countries:
  • Brazil (5 separate topics)
  • Germany (5 separate topics)
  • Russia
  • Spain
  • Ukraine
  • France
  • China

We hope that this addition will help you find localized news more easily. If you think there is a news source in your language that we've missed, we'll be happy to learn about it. Use our comments section or contact us directly for recommendations.

If you are a publisher and you want to feature your content on our new catalog, get in touch.

-- 
The Inoreader team

Stay up-to-date with the status of our platform.



TLDR: We have launched a new status page for Inoreader where you can stay informed about the status of our platform and its components - status.inoreader.com. For details, read on.

Inoreader may look like a simple and light service, but there's actually a huge infrastructure behind it. We are currently running 20 powerful physical servers and twice as many virtual machines. Our article database layer and search index are more than 20 Terabytes in size and can still return all article requests and most search results with sub-second delay. The backend processes millions of rules and active searches per hour, the PubSubHubbub service makes sure you receive your news in real time for supported feeds, and the feed polling system takes care of the rest. Again, millions of feeds and updates are processed every hour. And we are just scratching the surface here.

With so much going on, it's easy to overlook a problem somewhere. Fortunately, the team at Innologica also specializes in monitoring systems, so we have you covered!

We have our own monitoring system called Insight that watches the whole infrastructure 24x7, from the hardware to the service level. Hardware malfunctions are often discovered even before an outage occurs at the node, e.g. failing hard drives are identified and replaced before they actually cause a crash. Network performance is monitored and trends are analyzed. If a bottleneck is forming, we are already looking into it before it becomes an issue. Hard drive near capacity? No worries, we already have an alert. Even if the whole datacenter loses connectivity, we are still in the know, because we have a second monitoring system in another location. On the service level, we monitor all backend processes of Inoreader, queues and script heartbeats. All of this in a single place.

Today, for the first time, we are sharing some screenshots of Insight - our in-house monitoring system. You may find that it looks familiar. That's because we use the same framework for both Inoreader and Insight.

Overview of our network topology. Node colors show the state of each element; edge colors show network utilization.



Single node view showing current status, interfaces, alarm history and location.



Node performance view.



Service level monitoring view.



What does this mean for you, the Inoreader user? We want to reassure you that we are fully committed to delivering a top-quality service without interruptions and issues. However, we live in the real world, not in a fairy tale, so incidents are bound to happen. Our job is to minimize them and to make sure they impact the service as little as possible.

Announcing our status page

All of the above has been happening in our kitchen for years without you noticing (well, apart from the overall service stability). Today we are launching our brand new status page!






Our status page is completely integrated with Insight, so any interruption or degradation will be reflected there automatically within seconds. You can bookmark it, and if you ever experience strange behavior from our service, you can check the page. At the bottom there are links to contact us.

P.S. If you need a status page that you can host yourself, check out Staytus. It's amazing!

--
The Inoreader team

Filters can now permit articles too, not only mute them.



Since the introduction of Filtered feeds back in 2015, tens of thousands of happy Inoreader users have used them every day to protect their news feeds by muting unwanted updates. Gradually we realized that filtering doesn't always mean muting something. There are viable use cases where a filter is better used to permit articles instead of muting them. What's more, many first-time users were confused by the inverted logic of filters.

Today we are launching a simple yet groundbreaking change to Filtered feeds. You can now permit only the articles that match your criteria and remove everything else. This is now the default option too. Existing filters are left untouched, so you don't need to worry about them - just bear the new default in mind when creating new ones.


Filtered feeds enable you to remove all the content you're not interested in from a specific feed. They only show articles from the past month. You can always delete a filter you've created to see all articles from your subscription, and you can also edit the filter to refine or completely change your filtering.
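Conceptually, the change boils down to flipping the filter predicate. Here is a minimal Python sketch - an illustration of the idea, not our production code:

def apply_filter(articles, matches, mode="permit"):
    # mode="permit": keep only matching articles (the new default).
    # mode="mute":   drop matching articles (the original behavior).
    if mode == "permit":
        return [a for a in articles if matches(a)]
    return [a for a in articles if not matches(a)]

articles = ["OpenNebula 5.4 released", "Celebrity gossip", "StorPool benchmark"]
print(apply_filter(articles, lambda a: "released" in a or "benchmark" in a))
# -> ['OpenNebula 5.4 released', 'StorPool benchmark']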

Filtering feeds is subject to some limitations:
  • Basic accounts can have 1 filtered feed
  • Starter accounts can have up to 10 filtered feeds
  • Plus accounts can have up to 30 filtered feeds
  • Professional accounts can have an unlimited amount of filtered feeds
If you need more freedom, consider an upgrade or get a 1-month free Pro trial.

--
The Inoreader Team

Google Drive Integration



For many, integrations are a key part of the Inoreader experience. If you are the solo type - the one who prefers to use Inoreader as a plain reader - you might want to skip this post. However, if you are still here, read on to see how you can actually benefit from this.
For those who already see the benefits of service integrations, this should be a very welcome addition.

Following the pattern of our Dropbox integration, we have integrated three major functionalities with Drive:


The major difference is that in Google Drive, articles are saved as editable Google Docs instead of PDF files. This can be very handy if you want to post-process an article before re-blogging it, or to annotate it before sharing it with your Google Docs collaborators.
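If you are curious how such a conversion works, here is a minimal, hypothetical sketch using Google's Drive v3 Python client - not Inoreader's actual implementation, just an illustration of how HTML content can be uploaded so that Drive converts it into an editable Google Doc:

from googleapiclient.discovery import build
from googleapiclient.http import MediaInMemoryUpload

def save_article_as_doc(creds, title, html):
    # creds: OAuth2 credentials with a Drive scope (assumed to already exist).
    # Requesting the Google Docs MIME type makes Drive convert the uploaded
    # HTML into an editable document instead of storing it as a plain file.
    service = build("drive", "v3", credentials=creds)
    media = MediaInMemoryUpload(html.encode("utf-8"), mimetype="text/html")
    metadata = {"name": title, "mimeType": "application/vnd.google-apps.document"}
    return service.files().create(body=metadata, media_body=media, fields="id").execute()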

To get started you need to connect your Google Drive account by going to Preferences -> Integration:



To send articles to your Google Drive, just hit the button under the article or choose "In Google Drive" from the more menu (three dots):





Of course you can save articles through our mobile apps too. Just tap the Share button and choose Google Drive:


                         



Google Drive Integration is generally available to all users!

Do you like Google Drive? Do you think the new integration will be useful to you? Share your thoughts in the comments below.

--
The Inoreader team

Announcing JSON Feed support


You can now subscribe to JSON Feed in Inoreader!

JSON Feed seems to have gained a lot of traction in the past few days. While it did mean some extra work for us to support it, we think it was well worth it. At the moment there are just a handful of publishers on board (e.g. Daring Fireball), but once all RSS readers support the spec, we believe there will be enough incentive for everyone to convert.



JSON is lighter and much more resilient to errors compared to XML. For you, the user, this translates to faster operation and far fewer failing feeds. If you take a look at the screenshot above, you will notice that I have a folder "Feeds with issues". That's because every day we have to debug broken XML feeds... I really hope that someday this will be history.

JSON Feed also aims to fix an age-old problem - The Clone Wars - by making the item id mandatory. We salute this decision! I am sure everyone has seen, in every RSS reader, an item repeating itself over and over again, or a whole feed dumping its last 20 items as new. Yep, we want that to become history too.
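To illustrate why a mandatory id matters, here is a minimal Python sketch (ours, not from the spec) of how a reader can deduplicate items when every one of them is guaranteed to carry an id:

import json

def new_items(feed_text, seen_ids):
    # JSON Feed guarantees every item has an "id", so deduplication becomes
    # a simple set lookup instead of guesswork over links and titles.
    feed = json.loads(feed_text)
    fresh = [item for item in feed.get("items", []) if item["id"] not in seen_ids]
    seen_ids.update(item["id"] for item in fresh)
    return fresh

seen = set()
doc = '{"version": "https://jsonfeed.org/version/1", "title": "Example", "items": [{"id": "1", "content_text": "Hello"}]}'
print(new_items(doc, seen))  # first fetch: one new item
print(new_items(doc, seen))  # second fetch: [] - no duplicates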

Found some great feeds that are now using JSON? Let everyone know in the comments section.

May 25 Update: Inoreader now supports JSON Feed for outbound feeds too!

You can now syndicate your tags, folders, active searches and all other feeds via JSON Feed:



--
The Inoreader team

Inoreader Brings Online Radio to the Digital Masses with the Help of Audioburst



Why did we choose to integrate Audioburst's API?

For many Americans, and for people in other English-speaking nations, consuming radio news while commuting, preparing dinner or cleaning the house is a daily ritual that makes up a significant portion of their information diet. According to Statista, radio is the "third most powerful medium in the United States, reaching 54 percent of the country's population daily", with 44 percent of the population listening to online radio, based on a 2015 census.

Why Audioburst?

Audioburst is a fast-growing B2B startup that indexes millions of audio segments (simply called bursts) from a variety of web sources, including online radio stations, podcasts and other partners, and currently boasts the single largest repository of personalized audio content ever collected.

The Audioburst AI analyzes every second of audio in radio programmes and online podcasts, allowing it to split massive amounts of audio content into short, 3-4 minute bursts covering specific topics. By integrating Audioburst's API, Inoreader offers quick snippets of all the leading news stories for consumers to easily and passively consume information.

What content?

The content offered by the Audioburst platform will initially be organized in 6 RSS feeds covering: Tech, Entertainment, Sports, US News, World News, and Business.




We integrated smoothly with its API and are proud to present our pilot program. Starting today, Tuesday, October 17th 2017, you will be able to follow Audioburst feeds conveniently from our catalogs: 

Web




You can use the background player in Inoreader, which is available for podcasts, .mp3 files and audio files in general. Read more about our background player here.



Android

iOS

If our pilot programme is embraced by the Inoreader community, we’ll be continuously adding more and more interesting online radio feeds covering specific niche topics. 

Now go ahead and try listening to online radio inside Inoreader!  






Black Friday - Cyber Monday 2017 and an Irresistible Offer From Inoreader





Greetings from the Inoreader Team!

Today marks the first day of our 2017 Black Friday Cyber Monday promo period. It is our pleasure to announce 6 months free for any annual paid plan purchased. We received a very positive reaction last year so we are giving it another go...


Upcoming Goodies

- Many of our coolest features will be getting revamped from a usability and design perspective.
- A few features long awaited by the Inoreader community are coming soon (hint: Popularity Index and more).
- A massive project is underway, but still in stealth. Stay tuned...
- UPDATED: We are working with a third-party service to try to battle fake news and misinformation; more on that in future updates. Since some of our readers expressed concerns about how this might affect their Inoreader experience, we want to clarify that this will be an optional feature that will only signal when the post information is potentially misleading, and will not filter anything! Thank you.

Inoreader is growing not only in the number of users but also as an organization; expect to see lots of new functionality and improvements in every aspect of the platform.

Regards from the Inoreader team!

Welcome to Teamblind and the Workplace Community


Welcome!

A huge welcome to Teamblind, which now has a spot in our catalog. It's a very hot service geared towards folks at tech companies, useful when you want to see what other employees think of the organization you work for (anonymously!). Most major tech companies are already present on the platform. We especially liked this from their USPs:

"Transparency results in voice and voice results in change, often for the better."

It's also a great place to follow workplace discussion topics such as "Women in Tech", "Compensation" and more.

How do Topics work? Topics is designed to make Blind available to a larger audience, still primarily those working in or interested in the technology sector. The goal in building this was to allow the community to move their conversations into deeper, highly engaging topics, segmented by profession - like Engineering, Design and Product Management - and even by location, like Seattle, the San Francisco Bay Area, Los Angeles and New York.

For a quick way to check out their topics' RSS feeds click below.


In our eyes, Teamblind's forums are a place not only for research, but also for making an impact on how organizations are managed, so ... double win.  

If you want to know more about their service, read on...

- What is Blind? - An anonymous community app for the workplace. 

- How to Blind? - Your company channel is private to you and your coworkers, meaning everything you post is visible across your organization.

For any further questions, please reach out to Teamblind at blindapp@teamblind.com

How to Get Instagram RSS Feed to Subscribe in Inoreader


We've been getting requests for a feature that allows subscribing to Instagram accounts inside Inoreader. Unfortunately, the Facebook team maintains a very conservative API, which doesn't allow us to do so for now.

However, if you want to follow Instagram accounts in the form of RSS inside Inoreader, we found some tools for you!

NB*: THE TOOLS LISTED BELOW WORK AS OF NOW, WE DO NOT GUARANTEE THAT THESE TOOLS STILL WORK WHEN YOU ARE READING THIS 




Quite straightforward: just enter this Websta URL with the account you want to follow in place of USERNAME. Voila!
https://websta.me/rss/n/USERNAME


The latest addition to Queryfeed is Instagram to RSS with some amazing filters such as GEO-LOCATION!


The quick URL format for getting an RSS feed from their service is: 
https://queryfeed.net/instagram?q=USERNAME
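If you prefer to build these URLs programmatically, here is a tiny Python helper - keeping in mind the warning above that these third-party endpoints may stop working at any time:

def instagram_feed_urls(username):
    # Both endpoints are third-party services; see the disclaimer above.
    return {
        "websta": f"https://websta.me/rss/n/{username}",
        "queryfeed": f"https://queryfeed.net/instagram?q={username}",
    }

print(instagram_feed_urls("example_account"))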

That's it for now. If you have more suggestions, please tell us in the comments so we can include them.

Success Story - How Inoreader Migrated From Bare-metal Servers to OpenNebula + StorPool




This will be a really long post and will get a bit technical for some people. The TLDR version is that we have made significant infrastructure upgrades in the last few months that will allow our service to scale in the coming years without any worries of downtime or potential data loss. We've been able to greatly improve our capacity and flexibility thanks to the great combination of OpenNebula and StorPool. If you are a tech geek like us, you might find it interesting to read more about our journey.

Prologue

Building and maintaining a cloud RSS reader requires resources. Lots of them! Behind the deceptively simple user interface there is a complex backend with a huge datastore that should be able to fetch millions of feeds in time, store billions of articles indefinitely and make any of them available in just milliseconds - either by searching or simply by scrolling through lists. Even calculating the unread counts for millions of users is enough of a challenge that it deserves a special module for caching and maintenance. The very basic feature that every RSS reader should have - being able to filter only unread articles - requires so much resource power that it contributes around 30% of the storage pressure on our first-tier databases.

We use our own hardware, colocated in a highly secure datacenter at Equinix, just 30 minutes away from our main office. This solution allows us to be independent in terms of our infrastructure. If something breaks, we don't need to wait for ticket resolution somewhere. We just go there and fix it ourselves. The Internet knows enough horror stories, like the recent 3-day outage of Cast or how the FBI stole an Instapaper server back in 2011. We will not allow this to happen to us.

Until very recently we were using bare-metal servers to operate our infrastructure, meaning we deployed services like database and application servers directly on the operating system of each server. We were not using virtualization except for some really small micro-services, where it was practically one physical server with local storage broken down into several VMs. Last year we reached a point where we had a 48U (rack-unit) rack full of servers. Here's how it looked in November 2017:


More than half of those servers were databases, each with its own storage. Usually 4 to 8 spinning disks in RAID-10 mode with expensive RAID controllers equipped with cache modules and BBUs. All this was required to keep up with the needed throughput.

There is one big issue with this setup. Once a database server fills up (usually at around 3TB) we buy another one, and the full one becomes read-only. The CPUs and memory on those servers remain heavily underutilized while the storage is full.
For a long time we knew we had to do something about it, otherwise we would soon need to rent a second rack, which would have doubled our bill. The cost was not the primary concern. It just didn't feel right to have a rack full of expensive servers that we couldn't fully utilize because their storage was full.

Furthermore, redundancy was an issue too. We had redundancy on the application servers, but for databases of this size it's very hard to keep everything redundant and fully backed up. Two years ago we had a major incident that almost cost us an entire server with 3TB of data, holding several months' worth of articles. We completely recovered all the data, but it was close.

Big changes were needed. 

While the development of new features is important, we had to stop for a while and rethink our infrastructure. After some long sessions and meetings with vendors, we made a final decision:


We will completely virtualize our infrastructure and we will use OpenNebula + KVM for virtualization and StorPool for distributed storage. 

Cloud Management

We chose this solution not only because it is practically free if you don't need enterprise support, but also because it is proven to be very effective. OpenNebula is now mature enough and has so many use cases that it's hard to ignore. It is completely open source, with a big community of experts and optional enterprise support. KVM is now used as the primary hypervisor for EC2 instances in Amazon AWS. This alone speaks volumes, and OpenNebula is primarily designed to work with KVM too. Our experience with OpenNebula in the past few months hasn't made us regret this decision even once.


Storage

Now, a crucial part of any virtualized environment is the storage layer. You aren't really doing anything if you are still using the local storage on your servers. The whole idea of virtualization is that your physical servers are expendable. You should be able to tolerate a server outage without any data loss or service downtime. How do you achieve that? With a separate, ultra-high-performance, fault-tolerant storage system connected to each server via a redundant 10G network.

This part is even more important than the choice of hypervisor, since it holds all our (and your!) data. If anything ever happens there, we will be exposed. So we were very careful to select the perfect solution, and there were many candidates.

There's EMC's enterprise solution, which can cost millions and uses proprietary hardware, so it's out of our league. Also, big vendors don't usually play well with small clients like us. There's a chance that we would just have to sit and wait for a ticket resolution if something breaks, which contradicts our vision.

Then there's RedHat's Ceph, which comes completely free of charge, but we were a bit afraid to use it, since nobody on the team had the expertise required to run it in production without any doubt that, in the event of a crash, we would be able to recover all our data. We were on a very tight schedule with this project, so we didn't have time to send someone for training. Performance figures were also not very clear to us and we didn't know what to expect. So we decided not to risk it for our main datacenter. We are now using Ceph in our backup datacenter, but more on that later.

Finally, there's one still relatively small vendor that just so happens to be located some 15 minutes away from us - StorPool. They were recommended to us by colleagues running similar services, and we had a quick kick-start meeting with them. After the meeting it was clear to us that those guys know what they are doing at the lowest possible level.
Here's what they do in a nutshell (quoted from their website):

StorPool is a block-storage software that uses standard hardware and builds a storage system out of this hardware. It is installed on the servers and creates a shared storage pool from their local drives in these servers. Compared to traditional SANs, all-flash arrays, or other storage software StorPool is faster, more reliable and scalable.

It doesn't sound very different from Ceph, so why did we choose them? Here are just some of the reasons:

  • They offer full support for a very reasonable monthly fee, saving us the need to have a trained Ceph expert onboard.
  • They promise higher performance than Ceph.
  • They have their own OpenNebula storage addon (yeah, Ceph does too, I know)
  • They are a local company and we can always pick up the phone and resolve any issues in minutes rather than hours or days like it usually ends up with big vendors.




And now the story begins...

The planning phase

So after we made our choice of virtualization platform, it was time to plan the project. This happened in November 2017, so not very long ago. We rented a second rack in our datacenter. The plan was to install the StorPool nodes there and gradually move servers over, converting them into hypervisors. Once everything was moved, we would remove the old rack.

We ordered 3 servers for the StorPool storage. Each of those servers has room for 16 hard disks. We only ordered half of the needed hard disks, because we knew that once we started virtualizing servers, we would salvage a lot of drives that wouldn't be needed otherwise.

We also ordered 10G network switches for the storage network and new Gigabit switches for the regular network, to replace our old switches. For the storage network we chose the Quanta LB8. Those beasts are equipped with 48x 10G SFP+ ports, which is more than enough for a single rack. For the regular Gigabit network we chose the Quanta LB4-M. It has two additional 10G SFP+ modules, which we used to connect the two racks via optic cable.

We also ordered a lot of other smaller stuff like 10G network cards, plus a lot of CPUs and DDR memory. Initially we didn't plan to upgrade the servers before converting them to hypervisors, in order to cut costs. However, after some benchmarking we found that our current CPUs were not up to the task. We were using mostly dual-CPU servers with Intel Xeon E5-2620 (Sandy Bridge) processors, and they were already dragging even before the Meltdown patches. After some research we chose to upgrade all servers to the E5-2650 v2 (Ivy Bridge), an 8-core (16 threads with Hyper-Threading) CPU with a turbo frequency of 3.4 GHz. We already had two of these, and benchmarks showed a two-fold increase in performance compared to the E5-2620.

We also decided to boost all servers to 128G of RAM. We had different configurations, but most servers had 16-64GB and only a handful were already at 128G. So we made some calculations and ordered 20+ CPUs and 500+GB of memory. A big shoutout here to Michael Goodliffe from ViralVPS, who provided us with great discounts so we didn't overshoot our budget too much. DDR memory is very expensive at the moment!

After we placed all the orders, we had about a month before everything arrived, so we used that time to prepare what we could without the additional hardware.

The preparation phase

We used the whole of December and part of January, while waiting for our equipment to arrive, to prepare for the coming big migration. We learned how OpenNebula works and tried everything that came to our minds to break it, to see how it behaves in different scenarios. This was very important for avoiding production mistakes and downtime later.

We didn't just wait for our hardware to arrive, though. We purchased one old but still powerful server with lots of memory to temporarily hold some virtual machines. The idea was to free up some physical servers, so we could shut them down, upgrade them and convert them into hypervisors in the new rack.

The execution phase

The first thing we did was to replace the old switches in the current rack with one Quanta LB4-M, which would allow us to link to the new rack with a 2x10G connection. We also arranged the cables in a neat cable management panel. Check the first photo in this post again to see the difference. The empty switch below the panel was placed there for fail-over: in case something happened to the main switch, we could easily reconnect all cables to the second one, because we had mirrored the configuration to it.


We then pulled a fiber cable to the new rack (which is situated on a different floor of the building). The new rack was still empty at this point:

  

That's 12 fibers, each of which can carry 10Gbps. More than enough for our needs :)

After setting up the network we had to wait a bit more for the StorPool servers to arrive. Due to some delays they landed in our office in late January. 


It was a late Friday afternoon, but we immediately started working on them, and by the evening they were installed and ready for the guys from StorPool to take over remotely and start the stress testing and initial setup. Kudos to the StorPool guys for making sure the servers were ready for deployment by Monday!



Monday came, and after confirmation from StorPool that the servers were ready, we grabbed them, the 10G switches and some cables, and headed straight to our datacenter to install them.

 

Several hours later, the empty rack wasn't so empty anymore. We installed the servers, configured both the 1G and 10G networks, connected everything to the old rack and tested the infrastructure.

 


As you can probably see from the pictures, everything is redundant. Each server and switch has two power supplies connected to independent circuits. Each server also has two 10G network interfaces connected to different switches, so if one fails, the other will take over. The actual storage layer has 3x redundancy, which means that we can lose two servers without any data loss!

Now it was time for StorPool to finalize the configuration of the storage cluster and give us the green light to connect our first hypervisor to it. Needless to say, they were quick about it, and the next day we were able to bring in two servers from the old rack and start our first real OpenNebula instance with StorPool as the storage.

 

If you've ever contacted our support, chances are you already know this guy. That's our own Rosen (a.k.a. wesson) arranging some cables here. Notice that only half of the drives are lit so far. We added 24 more drives, salvaged from our old servers, two weeks later.

Once we had our shiny new OpenNebula cluster with StorPool storage fully working, it was time to migrate the virtual machines that were still running on local storage. The guys from StorPool helped us a lot here by providing a migration strategy that we had to execute for each VM. If there is interest, we can describe the whole process in a separate post.

From here on we gradually migrated the physical servers to virtual machines. The strategy was different for each server - some of them were databases, others application and web servers. We managed to migrate all of them with several seconds to no downtime at all. At first we didn't have much space for virtual machines, since we had only two hypervisors, but with each iteration we were able to convert more and more servers at once.

 

Not all servers went into our OpenNebula cluster. Some were single-CPU servers and some were storage-specific machines. Those went into our new backup datacenter, but more on this later.
After that, each server went through a complete overhaul. CPUs and memory were upgraded, the expensive RAID controllers were removed from the expansion slots, and in their place we installed 10G network cards. Large (>2TB) hard drives were removed and smaller drives were installed just to hold the OS. Thanks to a special staging room in the datacenter, we didn't have to haul the servers to our main office and back each time. After the servers were re-equipped, they were installed in the new rack and connected to the OpenNebula cluster. The guys from StorPool configured each server's connection to the storage and verified that it was ready for production use.









The first 24 leftover 2TB hard drives were immediately put to work in our StorPool cluster, and the rest went to our backup datacenter.

This process was repeated several times until we ran out of servers in the old rack.



The result

In just a couple of weeks of hard work, we managed to migrate everything!



In the new rack we have a total of 120TB of raw storage, 1.2TB of RAM and 336 CPU cores. Each server is connected to the network with 2x10G network interfaces.

That's roughly 3 times the capacity and 10 times the network performance of our old setup, with only half the physical servers!

The flexibility of OpenNebula and StorPool allows us to use the hardware very efficiently. We can spin up virtual machines in seconds with any combination of CPU, memory, storage and network interfaces, and later we can change any of those parameters just as easily. It's DevOps heaven!

This setup will be enough for our needs for a long time, and we have more than enough room for expansion if the need arises.




Our OpenNebula cluster


We now have more than 50 virtual machines, because we have split some physical servers into several smaller VMs with load balancers for better load distribution, and we have allocated more than 46TB of storage.



We have 12 hypervisors with plenty of resources available on each of them. All of them use the same CPU model, which allows us to use QEMU's "host-passthrough" CPU mode to improve VM performance without the risk of a VM crash during a live migration.
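For reference, this is roughly how the CPU model can be pinned in an OpenNebula VM template - a sketch based on OpenNebula 5.x template syntax; verify against the documentation for your version:

CPU_MODEL = [
  MODEL = "host-passthrough"
]

Because every hypervisor exposes an identical CPU, a VM started with this mode sees the same processor features on any host, which is what makes live migration safe.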



We are very happy with this setup. Whenever we need to start a new server, it only takes minutes to spin up a new VM instance with whatever CPU and memory configuration we need. If a server crashes, all of its VMs will automatically migrate to another server. OpenNebula makes it really easy to start new VMs, change their configurations, manage their lifecycle and even completely manage your networks and IP address pools. It just works!

StorPool, on the other hand, makes sure we have all the needed IOPS at our disposal whenever we need them.

We are using Graphite + Grafana to plot some really nice graphs for our cluster.



We have borrowed the solution from here. That's what's so great about open software!
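For the curious, metrics typically reach Graphite through its plaintext protocol - one "path value timestamp" line per metric on TCP port 2003. A minimal Python sketch with made-up host and metric names:

import socket
import time

GRAPHITE_HOST = "graphite.example.local"  # hypothetical carbon host
GRAPHITE_PORT = 2003                      # Graphite's plaintext protocol port

def send_metric(path, value):
    # The plaintext protocol expects "<metric.path> <value> <unix_timestamp>\n".
    line = f"{path} {value} {int(time.time())}\n"
    with socket.create_connection((GRAPHITE_HOST, GRAPHITE_PORT), timeout=5) as sock:
        sock.sendall(line.encode("ascii"))

send_metric("cluster.hv01.vm_count", 12)  # made-up metric name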

StorPool is also using Grafana for their performance monitoring, and they have provided us with access to it, so we can get insights into what the storage is doing at any moment, which VMs are the biggest consumers, etc. This way we always know when a VM has gone rogue and is stealing our precious IOPS.




The Black Mirror

I have mentioned a backup datacenter a couple of times in this post.
Not all of our servers were good candidates for becoming hypervisors. For example, some of them were not compatible with Ivy Bridge CPUs, and some were thicker 2U (rack-unit) servers, while we really wanted to have only 1U hypervisors to save rack space. Also, some servers are optimized for storage, and we still had plenty of 2TB drives lying around after the migration. So we quickly put them to work!
In the basement of our office building we rented a room, equipped it with air conditioning and a big UPS, and moved the less fortunate servers there.





Here we now have one backup server with 20TB of storage for our daily VM snapshots and database backups, 3 nodes forming a 32TB Ceph cluster for operational storage, 4 hypervisors running a second OpenNebula cluster independent from our main one, and some other lower-priority servers.

The idea is to have a fully working copy of our main datacenter that can be used in an emergency. It is intended to be a mirror for black days, hence - Black Mirror. We now have all our data replicated in real time from our main datacenter, so even in the highly unlikely event that some disaster happens there, we will still have everything off-site and will be able to restore operations in less than an hour.

The reason we chose Ceph as the storage here is partly that we didn't need the performance of our main datacenter, but also that we are tech nerds and like to learn new technologies, so this was the perfect opportunity.

For the geeks interested in Ceph: each node is equipped with 2 independent MegaRaid SAS 9271-4I controllers with 1G cache modules + BBUs, each controlling 4 drives - 8 drives per node (2 for the OS, 6 for Ceph). Drives are exported as single-disk RAID-0s, because we wanted to use the controller caches as much as possible. This is usually a very expensive setup, but we had those controllers lying around after the main project was completed and we wanted to put them to work. We are open to suggestions here, especially JBOD vs RAID, so if you are feeling geeky, tell us what you think in the comments below.

OpenNebula works quite well with Ceph. It has built-in support for Ceph datastores, and the setup is quite straightforward and well documented, so I'm not going to repeat it here.

We have managed to plot some nice Ceph and OpenNebula dashboards in our Grafana too.





What's left?

We still have some connectivity issues from time to time, because we still use a single Internet provider in both DCs, even though we acquired our own autonomous system back in 2016 - AS49663. We never found time to actually implement it. We will try to do that in the coming months to become more independent in terms of our Internet connectivity too.


Epilogue

If you made it this far - congratulations! You have geeked out as much as we did building this infrastructure. If you have ever doubted the future of Inoreader or our company, don't anymore! We are here to stay, and by upgrading our infrastructure with the latest and greatest technologies like OpenNebula and StorPool, we are making sure that you will receive the best possible service with ultra-high performance and zero downtime in the years to come.






Mobile App Redesign and a Bunch of Improvements in Inoreader



Dear Inoreader fans, it's with the utmost pleasure that we introduce our sweet new mobile app redesign. It's a major update that brings an entirely new look and feel to the app!

We have just released it for iOS first and Android is coming this May.



Here is what’s new:
  • Newly organized navigation system. We feel that the new navigation is much simpler to understand and use; it's faster and will allow us to greatly expand the functionality of the app.




  • There's a completely new section called "Library", which houses your Tags, Active searches and Saved web pages, and in the future will do quite a lot more. We have also added a new "Recently read" section under the same group.


  • The "Eye" menu is now opened via left swipe from the right corner of the screen.




  • We have added a specially optimized article view for selected feeds. Try it with "The Verge", for example. We plan to add this view to a lot more feeds.



  • Redesigned sharing and saving options


  • New default fonts optimized for reading - we have chosen Ubuntu for the UI elements. It has a contemporary style and contains characteristics unique to the Ubuntu brand that convey a "precise, reliable and free attitude." For the default body text we have chosen Merriweather as the default serif font, but it can still be changed in preferences to several other options.

  


  • Custom icons everywhere, making the app feel friendlier and more intuitive, with an overall focus more on imagery and less on explanatory texts.





  • The search functionality has been enhanced with a new search history and combined search for feeds and articles.

          


  • We have changed the concept around themes. We now support "Day" and "Night" modes that can be switched depending on the time of day.

          

  • For the iPhone X we have a special AMOLED-optimized "Night" mode.

          

  • The comments section has also been completely redesigned to be more user friendly.

          

  • We have also fixed a lot of bugs from the previous versions and made many performance optimizations.
  • Last but not least, we are very proud to boast a large (by our standards) beta tester community, so expect lots of improvements, and any major issues to be swiftly dealt with!


All the best from our team. We want your feedback - here in the comments, via the Contact Support option in the app, on our forum, wherever you choose... we are tuned in!





Announcing JSON Feed support

$
0
0

You can now subscribe to JSON Feed in Inoreader!

JSON Feed seem to have gained a lot of traction in the past few days. While it does mean that we had to do some extra work to support it, we think that it was well worth it. At the moment there are just a handful of publishers on board (e.g. Daring Fireball), but when all RSS readers support the spec, we believe that there will be enough incentive for everyone to convert.

JSON is lighter and much more resilient to errors when compared to XML. To you, the user this translates to faster operation and much less failing feeds. If you take a look at the screenshot above, you will notice that I have a folder “Feeds with issues”. That’s because everyday we have to debug broken XML feeds… I really hope that someday this will be history.

JSON Feed also aims to fix a century-old problem – The Clone Wars by making the item id mandatory. We salute this decision! I am sure everyone has seen it in every RSS reader when an item repeats itself over and over again or a whole feed dumps its last 20 items as new. Yep, we want that to become history too.

Found some great feeds that are now using JSON? Let everyone know in the comments section.

May 25 Update: Inoreader now supports JSON Feed for outbound feeds too!

You can now syndicate your tags, folders, active searches and all other feeds via JSON Feed:


The Inoreader team


Inoreader Brings Online Radio to the Digital Masses with the Help of Audioburst

$
0
0

 

Why we chose to integrate Audioburst’s API?

For many Americans and other English-speaking nations, consuming radio news while commuting, preparing dinner or cleaning the house is a daily ritual that makes up a significant portion of their information diet. According to Statista, radio is the “third most powerful medium in the United States, reaching 54 percent of the country’s population daily” with 44 percent of the population listening to online radio based on a 2015 census.

Why Audioburst?

Audioburst is a fast growing B2B startup that indexes millions of audio segments (called simply bursts) from a variety of web sources including online radio stations, podcasts and other partners, thus currently boasting the single largest repository of personalized audio content ever collected.

The Audioburst AI allows every second of audio in radio programmes and online podcasts to be analyzed, allowing it to split massive amounts of audio content into short, 3-4 minute bursts covering specific topics.  By integrating Audioburst’s API, Inoreader offers quick snippets of all the leading news stories for consumers to easily and passively consume  information.

What content?

The content offered by the Audioburst platform will initially be organized in 6 RSS feeds covering: Tech, Entertainment, Sports, US News, World News, and Business.

 

We integrated smoothly with its API and are proud to present our pilot program. Starting today, Tuesday, October 17th 2017, you will be able to follow Audioburst feeds conveniently from our catalogs:
Web
You can use the background player in Inoreader available for any podcasts, .mp3 and audio files in general. Read more about our background player here.
 
Android
iOS
If our pilot programme is embraced by the Inoreader community, we’ll be continuously adding more and more interesting online radio feeds covering specific niche topics.
Now go ahead and try listening to online radio inside Inoreader!

 

Black Friday – Cyber Monday 2017 and an Irresistible Offer From Inoreader

$
0
0

Greetings from the Inoreader Team!

Today marks the first day of our 2017 Black Friday Cyber Monday promo period. It is our pleasure to announce 6 months free for any annual paid plan purchased. We received a very positive reaction last year so we are giving it another go…

Upcoming Goodies

– Many of our coolest features will be getting revamped from usability and design perspective.
– A few features long awaited by the Inoreader community are coming soon (hint: Popularity Index and more)
– A massive project is underway, but still in stealth. Stay tuned…
– UPDATED: We are working with a third-party service to try and battle fake news and misinformation, more on that in future updates. Since some of our readers expressed concerns how this might affect their Inoreader experience, we want to clarify that this will be an optional feature that will only signal if the post information is potentially misleading, and not filter anything! Thank you.

Inoreader is growing not only in number of users but also as an organization, expect to see lots of new functionality and improvements in every aspect of the platform.
Regards from the Inoreader team!

Welcome to Teamblind and the Workplace Community

$
0
0


Welcome!

Huge welcome to Teamblind with a spot on our catalog, a very hot service geared towards folks in tech companies, useful when you are trying to see what other employees think of the organization you are working for (anonymously!). You can see most major tech companies already present on the platform. We especially liked this from their USPs: 

“Transparency results in voice and voice results in change, often for the better.”

Also, a great place to follow topics regarding workplace discussions such as “Women in Tech”, “Compensation” and more. 

How does topics work? – Topics, is designed to make Blind available to a larger audience, still primarily for those working or interested in the technology sector. The goal in building this was to allow the community to move their conversations into deeper, highly engaging topics. Segmented by profession like Engineering, Design, Product Management and even locations like Seattle, San Francisco Bay, Los Angeles, New York.

For a quick way to check out their topics’ RSS feeds click below.

In our eyes, Teamblind’s forums are a place not only for research, but also for making an impact on how organizations are managed, so … double win.  

If you want to know more about their service, read on…

– What is Blind? – An anonymous community app for the workplace. 

– How to Blind? – Your company channel is private to you and your coworkers, meaning everything you post is visible across your organization.

For any further questions, please reach out to Teamblind at blindapp@teamblind.com

How to Get Instagram RSS Feed to Subscribe in Inoreader

$
0
0

We’ve been getting requests to integrate a feature that allows subscribing to Instagram accounts inside Inoreader. Unfortunately the Facebook team has a very conservative API which doesn’t allow us to do so for now.

However, if you want to follow Instagram accounts in the form of RSS inside Inoreader, we found some tools for you!
NB*: THE TOOLS LISTED BELOW WORK AS OF NOW, WE DO NOT GUARANTEE THAT THESE TOOLS STILL WORK WHEN YOU ARE READING THIS 

Quite straightforward, just enter this Websta URL with the account you want to follow in place of USERNAME. Voila!

https://websta.me/rss/n/USERNAME

 

The latest addition to Queryfeed is Instagram to RSS with some amazing filters such as GEO-LOCATION!
The quick URL format for getting an RSS feed from their service is:
https://queryfeed.net/instagram?q=USERNAME
This is it for now, if you have more suggestions, please tell us in the comments so we can include them.

Success Story – How Inoreader Migrated From Bare-metal Servers to OpenNebula + StorPool

$
0
0



This will be a really long post and will get a bit technical for some people. The TLDR version is that we have made a significant infrastructure upgrades in the last few months that will allow our service to scale in the coming years without any worries of downtime and potential data loss. We’ve been able to greatly improve our capacity and flexibility thanks to the great combination of  OpenNebula and StorPool. If you are tech geek like us, it might be interesting for you to read more about our journey.

Prolog

Building and maintaining a cloud RSS reader requires resources. Lots of them! Behind the deceivingly simple user interface there is a complex backend with huge datastore that should be able to fetch millions of feeds in time, store billions of articles indefinitely and make any of them available in just milliseconds – either by searching or simply by scrolling through lists. Even calculating the unread counts for millions of users is enough of a challenge that it deserves a special module for caching and maintaining. The very basic feature that every RSS reader should have – being able to filter only unread articles, requires so much resource power that it contributes to around 30% of the storage pressure on our first-tier databases.

We are using our own hardware collocated in a highly secure datacenter at Equinix, just 30 minutes away from our main office. This solution allows us to be independent in terms of our infrastructure. If something breaks, we don’t need to wait for ticket resolution somewhere. We just go there and fix it by ourselves. The Internet knows enough horror stories, like the recent 3-day outage of Cast or how the The FBI stole an Instapaper server back in 2011. We will not allow this to happen to us.

Until very recently we were using bare-metal servers to operate our infrastructure, meaning we deployed services like database and application servers directly on the operating system of the server. We were not using virtualization except for some really small micro-services and it was practically one physical server with local storage broken down into several VMs. Last year we have reached a point where we had a 48U (rack-units) rack full of servers. Here’s how it looked like in November 2017:

 

 

More than half of those servers were databases, each with its own storage. Usually 4 to 8 spinning disks in RAID-10 mode with expensive RAID controllers equipped with cache modules and BBUs. All this was required to keep up with the needed throughput.

There is one big issue with this setup. Once a database server fills up (usually at around 3TB) we buy another one and this one becomes read-only. CPUs and memory on those servers remain heavily underutilized while the storage is full.
For a long time we knew we have to do something about it, otherwise we would soon need to rent a second rack, which would have doubled our bill. The cost was not the primary concern. It just didn’t feel right to have a rack full of expensive servers that we couldn’t fully utilize because their storage was full.

Furthermore redundancy was an issue too. We had redundancy on the application servers, but for databases with this size it’s very hard to keep everything redundant and fully backed up. Two years ago we had a major incident that almost cost us an entire server with 3TB of data, holding several months worth of article data. We have completely recovered all data, but this was close.

Big changes were needed.

While the development of new features is important, we had to stop for a while and rethink our infrastructure. After some long sessions and meetings with vendors we have made a final decision:

We will completely virtualize our infrastructure and we will use OpenNebula + KVM for virtualization and StorPool for distributed storage.

 

Cloud Management

We have chosen this solution not only because it is practically free if you don’t need enterprise support but also because it is proven to be very effective. OpenNebula is now mature enough and has so many use cases it’s hard to ignore. It is completely open source with big community of experts and has an optional enterprise support. KVM is now used as primary hypervisor for EC2 instances in Amazon EWS. This alone speaks a lot and OpenNebula is primarily designed to work with KVM too. Our experience with OpenNebula in the past few months didn’t make us regret this decision even once.

 

Storage

Now a crucial part of any virtualized environment is the storage layer. You aren’t really doing anything if you are still using the local storage on your servers. The whole idea of virtualization is that your physical servers are expendable. You should be able to tolerate a server outage without any data loss or service downtime. How do you achieve that? With a separate, ultra-high performance fault-tolerant storage connected to each server via redundant 10G network.

This part is even more important than the choice of a hypervisor since it holds all our (and yours too!) data. If anything ever happens there we will be exposed. So we were very careful to select the perfect solution and there were many.

There’s EMC‘s enterprise solution, which can cost millions and uses proprietary hardware, so it’s out of our league. Also big vendors doesn’t usually play well with small clients like us. There’s a chance that we will just have to sit and wait for a ticket resolution if something breaks, which contradicts our vision.

Then there’s RedHat’s Ceph, which comes completely free of charge, but we were a bit afraid to use it since nobody at the team had the required expertise to run it in production without any doubt that in any event of a crash we will be able to recover all our data. We were on a very tight schedule with this project, so we didn’t have any time to send someone for trainings. Performance figures were also not very clear to us and we didn’t know what to expect. So we decided not to risk with it for our main datacenter. We are now using Ceph in our backup datacenter, but more on that later.

Finally there’s a one still relatively small vendor, that just so happens to be located some 15 minutes away from us – StorPool. They were recommended to us by colleagues running similar services and we had a quick kick-start meeting with them. After the meeting it was clear to us that those guys know what they are doing at the lowest possible level.
Here’s what they do in a nutshell (quote from their website):

StorPool is a block-storage software that uses standard hardware and builds a storage system out of this hardware. It is installed on the servers and creates a shared storage pool from their local drives in these servers. Compared to traditional SANs, all-flash arrays, or other storage software StorPool is faster, more reliable and scalable.

Doesn't sound very different from Ceph, so why did we choose them? Here are just some of the reasons:

  • They offer full support for a very reasonable monthly fee, saving us the need to have a trained Ceph expert on board.
  • They promise higher performance than Ceph.
  • They have their own OpenNebula storage add-on (yes, Ceph has one too, I know).
  • They are a local company, so we can always pick up the phone and resolve any issue in minutes rather than the hours or days it usually takes with big vendors.

And now the story begins…

The planning phase

So after we made our choice of virtualization stack, it was time to plan the project. This happened in November 2017, so not long ago. We rented a second rack in our datacenter. The plan was to install the StorPool nodes there and gradually move servers over, converting them into hypervisors. Once everything was moved, we would remove the old rack.

We ordered 3 servers for the StorPool storage, each with room for 16 hard drives. We ordered only half of the needed drives, because we knew that once we started virtualizing servers, we would salvage a lot of drives that wouldn't be needed otherwise.

We also ordered 10G network switches for the storage network and new Gigabit switches to replace our old ones on the regular network. For the storage network we chose the Quanta LB8. Those beasts are equipped with 48x10G SFP+ ports, which is more than enough for a single rack. For the regular Gigabit network we chose the Quanta LB4-M. They have additional 2x10G SFP+ ports, which we used to connect the two racks via optical fiber.

We also ordered a lot of other, smaller stuff, like 10G network cards, plus a lot of CPUs and DDR memory. Initially we didn't plan to upgrade the servers before converting them to hypervisors, in order to cut costs. However, after some benchmarking we found that our current CPUs were not up to the task. We were using mostly dual-CPU servers with the Intel Xeon E5-2620 (Sandy Bridge), and they were already struggling even before the Meltdown patches. After some research we chose to upgrade all servers to the E5-2650 v2 (Ivy Bridge), an 8-core (16 threads with Hyper-Threading) CPU with a turbo frequency of 3.4 GHz. We already had two of these, and benchmarks showed a two-fold increase in performance compared to the E5-2620.

We also decided to boost all servers to 128GB of RAM. Configurations varied, but most servers had 16-64GB and only a handful were already at 128GB. So we did some calculations and ordered 20+ CPUs and 500+GB of memory. A big shout-out here to Michael Goodliffe from ViralVPS, who gave us great discounts so we didn't overshoot our budget too much. DDR memory is very expensive at the moment!

After we placed all the orders, we had about a month before everything would arrive, so we used that time to prepare whatever we could without the new hardware.

The preparation phase

We used all of December and part of January, while waiting for our equipment to arrive, to prepare for the coming big migration. We learned how OpenNebula works and tried everything we could think of to break it and see how it behaves in different scenarios. This was very important for avoiding production mistakes and downtime later.
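To give you an idea of the kind of poking around we did, here is a minimal sketch of a cluster smoke test using the official pyone Python bindings for OpenNebula's XML-RPC API (the endpoint and credentials are placeholders, not our real setup):

    import pyone

    # Connect to the OpenNebula front-end (placeholder host and credentials).
    one = pyone.OneServer("http://one-frontend:2633/RPC2", session="oneadmin:password")

    # List all hosts and their states to verify every hypervisor is visible.
    for host in one.hostpool.info().HOST:
        print(host.ID, host.NAME, host.STATE)

    # List all VMs: owned by any user (-2), full ID range (-1, -1), any state (-1).
    for vm in one.vmpool.info(-2, -1, -1, -1).VM:
        print(vm.ID, vm.NAME, vm.STATE)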
We didn't just wait for the hardware, though. We purchased one old but still powerful server with lots of memory to temporarily hold some virtual machines. The idea was to free up a few physical servers so we could shut them down, upgrade them and convert them into hypervisors in the new rack.

The execution phase

The first thing we did was to replace the old switches in the current rack with one Quanta LB4-M, which would allow us to link to the new rack over a 2x10G connection. We also arranged the cables in a neat cable-management panel. Check the first photo in this post again to see the difference. The empty switch below the panel is there for fail-over: if anything happened to the main switch, we could quickly reconnect all cables to the second one, since we had mirrored the configuration to it.
We then pulled a fiber cable to the new rack (situated on a different floor of the building). The new rack was still empty at this point:
  
That's 12 fibers, each capable of carrying 10Gbps – more than enough for our needs 🙂
After setting up the network, we had to wait a bit longer for the StorPool servers to arrive. Due to some delays, they landed in our office in late January.
It was a late Friday afternoon, but we immediately started working on them, and by the evening they were installed and ready for the StorPool team to take over remotely and begin the stress testing and initial setup. Kudos to them for making sure the servers were ready for deployment by Monday!
Monday came, and after confirmation from StorPool that the servers were ready, we grabbed them, the 10G switches and some cables, and headed straight to our datacenter to install everything.
 
Several hours later, the empty rack wasn't so empty anymore. We installed the servers, configured both the 1G and 10G networks, connected the new rack to the old one and tested the infrastructure.
 

As you can probably see from the pictures, everything is redundant. Each server and switch has two power supplies connected to independent circuits, and each server has two 10G network interfaces connected to different switches, so if one fails, the other takes over. The storage layer itself has 3x redundancy, which means we can lose two storage servers without any data loss!

Now it was time for StorPool to finalize the configuration of the storage cluster and give us the green light to connect our first hypervisor to it. Needless to say, they were quick about it, and the next day we were able to bring in two servers from the old rack and start our first real OpenNebula instance with StorPool as the storage.
 
If you've ever contacted our support, chances are you already know this guy. That's our own Rosen (a.k.a. wesson) arranging some cables here. Notice that only half of the drive bays are lit; two weeks later we added 24 more drives salvaged from our old servers.

Once our shiny new OpenNebula cluster with StorPool storage was fully working, it was time to migrate the virtual machines that were still running on local storage. The StorPool team helped us a lot here by providing a migration strategy that we executed for each VM. If there is interest, we can describe the whole process in a separate post.

From here on, we gradually migrated the physical servers to virtual machines. The strategy was different for each server: some were databases, others application and web servers. We managed to migrate all of them with anywhere from a few seconds of downtime to none at all. At first we didn't have much room for virtual machines, since we had only two hypervisors, but with each iteration we were able to convert more and more servers at once.

Not all servers went into our OpenNebula cluster. Some were single-CPU machines, and some were storage-specific. Those went into our new backup datacenter, but more on this later.
After that, each server went through a complete overhaul. CPUs and memory were upgraded, and the expensive RAID controllers were removed from the expansion slots to make room for 10G network cards. Large (2TB and larger) hard drives were removed, and smaller drives were installed just to hold the OS. Thanks to a special staging room in the datacenter, we didn't have to haul the servers to our main office and back each time. Once re-equipped, the servers were installed in the new rack and connected to the OpenNebula cluster. The StorPool team configured each server's connection to the storage and verified that it was ready for production use.

The first 24 leftover 2TB hard drives were immediately put to work in our StorPool cluster, and the rest went into our backup datacenter.

This process was repeated several times until we ran out of servers in the old rack.

The result

In just a couple of weeks of hard work, we managed to migrate everything!

In the new rack we have a total of 120TB of raw storage, 1.2TB of RAM and 336 CPU cores. Each server is connected to the network with 2x10G network interfaces.

That’s roughly 3 times the capacity and 10 times the network performance of our old setup with only half the physical servers!

The flexibility of OpenNebula and StorPool allows us to use the hardware very efficiently. We can spin up virtual machines in seconds with any combination of CPU, memory, storage and network interfaces, and later change any of those parameters just as easily. It's DevOps heaven!
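For illustration, here is roughly what that looks like through the pyone bindings. This is a minimal sketch, not our actual configuration: the endpoint, credentials, image and network names are placeholders, and the CPU_MODEL attribute assumes a recent OpenNebula version:

    import pyone

    one = pyone.OneServer("http://one-frontend:2633/RPC2", session="oneadmin:password")

    # A minimal VM template: 4 vCPUs, 8GB of RAM, a disk cloned from an
    # existing image and a NIC in an existing virtual network.
    template = """
    NAME      = "web-01"
    VCPU      = 4
    CPU       = 4
    MEMORY    = 8192
    DISK      = [ IMAGE = "debian-base" ]
    NIC       = [ NETWORK = "private" ]
    CPU_MODEL = [ MODEL = "host-passthrough" ]  # possible because all hypervisors share a CPU model (more below)
    """

    # Allocate the VM; False = do not hold it, so the scheduler deploys it right away.
    vm_id = one.vm.allocate(template, False)
    print("Created VM", vm_id)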

This setup will be enough for our needs for a long time, and we have more than enough room for expansion should the need arise.

 

Our OpenNebula cluster

We now have more than 50 virtual machines, because we split some physical servers into several smaller VMs behind load balancers for better load distribution, and we have allocated more than 46TB of storage.

We have 12 hypervisors with plenty of resources available on each of them. All of them use the same CPU model, which lets us use QEMU's "host-passthrough" CPU mode to improve VM performance without the risk of a VM crashing during a live migration.

We are very happy with this setup. Whenever we need a new server, it takes only minutes to spin up a VM instance with whatever CPU and memory configuration we need. If a server crashes, all of its VMs are automatically migrated to another one. OpenNebula makes it really easy to start new VMs, change their configuration, manage their lifecycle and even fully manage your networks and IP address pools. It just works!
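Live migration itself is a single API call. A hedged sketch (the VM and host IDs are placeholders, and the exact argument list of one.vm.migrate varies slightly between OpenNebula versions):

    import pyone

    one = pyone.OneServer("http://one-frontend:2633/RPC2", session="oneadmin:password")

    # Live-migrate VM 42 to host 3: live=True, enforce=False,
    # and -1 keeps the VM on its current datastore.
    one.vm.migrate(42, 3, True, False, -1)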

StorPool, on the other hand, makes sure we have all the IOPS we need at our disposal whenever we need them.

We are using Graphite + Grafana to plot some really nice graphs for our cluster.

 

We borrowed the solution from here. That's what's so great about open software!
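Getting data into Graphite is trivial, which is part of the appeal. A minimal sketch of pushing a metric over Graphite's plaintext protocol (the host and metric name are placeholders):

    import socket
    import time

    GRAPHITE_HOST = "graphite.internal"  # placeholder
    GRAPHITE_PORT = 2003                 # Graphite's plaintext protocol port

    def send_metric(path, value):
        # Plaintext format: "<metric.path> <value> <unix_timestamp>\n"
        line = "%s %s %d\n" % (path, value, int(time.time()))
        with socket.create_connection((GRAPHITE_HOST, GRAPHITE_PORT)) as sock:
            sock.sendall(line.encode("ascii"))

    # Example: report the number of running VMs (the value is illustrative).
    send_metric("cloud.opennebula.vms_running", 52)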

StorPool also uses Grafana for performance monitoring, and they have given us access to it, so we can get insights into what the storage is doing at any moment, which VMs are the biggest consumers, and so on. This way we always know when a VM has gone rogue and is stealing our precious IOPS.

 

The Black Mirror

I have mentioned a backup datacenter a couple of times in this post.
Not all of our servers were good candidates for becoming hypervisors. Some were not compatible with Ivy Bridge CPUs, some were thicker 2U (rack-unit) servers, and we really wanted only 1U hypervisors to save rack space. Some servers were also optimized for storage rather than compute, and we still had plenty of 2TB drives lying around after the migration. So we quickly put them to work!
We rented a room in the basement of our office building, equipped it with air conditioning and a big UPS, and moved the less fortunate servers there.

Here we now have one backup server with 20TB of storage for our daily VM snapshots and database backups, 3 nodes forming a 32TB Ceph cluster for operational storage, 4 hypervisors running a second OpenNebula cluster independent from our main one, and some other lower-priority servers.
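Since this site runs mostly unattended, a simple programmatic health check goes a long way. A minimal sketch that shells out to the ceph CLI (it assumes the ceph client and an admin keyring are present on the host, and the JSON field names of Luminous-era releases):

    import json
    import subprocess

    def ceph_health():
        # "ceph status" with JSON output gives machine-readable cluster state.
        raw = subprocess.check_output(["ceph", "status", "--format", "json"])
        return json.loads(raw)["health"]["status"]

    # e.g. "HEALTH_OK", "HEALTH_WARN" or "HEALTH_ERR"
    if ceph_health() != "HEALTH_OK":
        print("Ceph cluster needs attention!")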

The idea is to have a fully working copy of our main datacenter that can be used in an emergency. It is intended to be a mirror for black days, hence – Black Mirror. All our data is now replicated there in real time from the main datacenter, so even in the highly unlikely event of a disaster, we will still have everything off-site and will be able to restore operations in less than an hour.

We chose Ceph as the storage here partly because we didn't need the performance of our main datacenter, but also because we are tech nerds who like learning new technologies, and this was the perfect opportunity.

For the geeks interested in Ceph: each node is equipped with 2 independent MegaRAID SAS 9271-4i controllers with 1GB cache modules + BBU, each controlling 4 drives, for 8 drives per node (2 for the OS, 6 for Ceph). The drives are exported as single-disk RAID-0 arrays, because we wanted to use the controller caches as much as possible. This is usually a very expensive setup, but we had those controllers lying around after the main project was completed and wanted to put them to work. We are open to suggestions here, especially on JBOD vs RAID, so if you are feeling geeky, tell us what you think in the comments below.

OpenNebula works quite well with Ceph. It has built-in support for Ceph datastores, and the setup is quite straightforward and well documented, so I won't repeat it here.

We managed to plot some nice Ceph and OpenNebula dashboards in our Grafana too.

What’s left?

We still have occasional connectivity issues, because we use a single Internet provider in both datacenters, even though we acquired our own autonomous system – AS49663 – back in 2016. We never found the time to actually put it to use. We will try to do that in the coming months, to become more independent in terms of Internet connectivity too.

 

Epilogue

If you made it this far – congratulations! You have geeked out as much as we did building this infrastructure. If you have ever doubted the future of Inoreader or our company, doubt no more! We are here to stay, and by upgrading our infrastructure with the latest and greatest technologies like OpenNebula and StorPool, we are making sure you will receive the best possible service, with ultra-high performance and zero downtime, for years to come.
