LoTW to be Offline for 6 hours 21 December

Discussion in 'Amateur Radio News' started by K5XS, Dec 17, 2020.

  1. W4BOH

    W4BOH XML Subscriber QRZ Page

    I have a visceral dislike for the big cloud operators and prefer in-house operation, if it's reasonably effective.
    Being on the cloud requires thousands of servers, hundreds of "operators" of unknown quality, and a functioning distribution system.
    It also introduces another layer of software to manage.
    Collapse of any of these elements would leave us with NO LOTW and, potentially, NO way to recover.
    Before flaming me, yes, I know our checking accounts and brokerages are there.
    Those operations have the leverage to demand good service, but even they have hiccups.
    Just imagine calling Google or Amazon from Newington to tell them there's a big contest on and hams are not getting their logs entered properly!
    My bank recently went through a merger/divestment and there have been several serious glitches.
    I've seen the Charles Schwab system suffer several data recovery missteps.
    I wonder how many rolls of mag tape it takes to hold the cloud?
    WL
     
  2. KI4KEA

    KI4KEA Ham Member QRZ Page

    Interesting. This...is actually what I do for a living.

    There really is not a cost savings in moving to “cloud environments” unless you are a very small operation or a very large operation. Places that fall in the middle are looking to avoid local issues, but the same ones will exist in the cloud.

    To be clear, I've never seen anyone save money by going to a cloud environment. You are really trading one environment for another.

    The biggest mistake people make is not making sure they have an exclusivity agreement with their vendors regarding their data. Without that, your data can be mined for anything and any reason.

    However, there remain many reasons to move to the cloud and lots of reasons not to. Mostly it's interoperability among applications and users.

    My guess is they have contracted out to a company to deal with this for them. That company may already have them in the cloud or at least in their data center. I can't see the ARRL having in-house staff doing this.

    For the ARRL it's a matter of cost. They would have to run the numbers and see whether it works for them or not.

    But let's look at the cost and how it works.

    In the case of the ARRL, they are not a giant, but they get a lot of traffic.

    So if they went to the cloud, they would “virtualize” their hardware, software, and networking as a service.

    You still must pay licenses for everything you have. Let’s just say it’s a clustered SQL environment. To have complete failover you would need at least two machines running Windows and SQL, the required switching, and attached or local storage.

    To truly take advantage of this you’ll need some kind of hypervisor, such as VMware or Microsoft Hyper-V. So now we’re looking at this:

    Licenses for VMware/Windows/SQL or Windows/SQL, depending on environment (based on CPU count these days)
    Licenses/contract fees with MS Azure or Amazon Web Services (AWS)
    Duplicate paths for networking
    Shared storage for data and virtualized servers
    Duplicate firewalls and routers
    And if you are smart, backups in another location for AWS or Azure… depends on your flavor.

    Now here is the interesting point. You get charged for everything you do in the cloud. The ticker is like the gas meter on your house.

    Everyone who uploads to LoTW will cost you money, in relation to the amount of data they send up or down.

    If you need to scale up, your prices scale up.

    When your admin logs in to make changes, you will get billed for that traffic, up and down, and for whatever changes they make.

    If you have backups you will be charged for every bit you make, every bit you move.

    We have not even talked about tiered storage (the best tier costs you more).
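    To make the metering concrete, here is a rough back-of-envelope sketch in Python. Every number in it is an assumption for illustration (a flat egress rate, made-up upload counts), not actual AWS/Azure pricing and not real LoTW traffic figures.

    ```python
    # Back-of-envelope cloud egress cost sketch.
    # All rates and volumes are illustrative assumptions, not real
    # AWS/Azure pricing and not real LoTW traffic figures.

    EGRESS_RATE_PER_GB = 0.09   # assumed flat $/GB leaving the cloud
    UPLOADS_PER_DAY = 50_000    # assumed number of log uploads per day
    AVG_RESPONSE_KB = 200       # assumed data sent back per upload, in KB

    def monthly_egress_cost(uploads_per_day: int, avg_kb: float, rate: float) -> float:
        """Estimate the monthly bill for data leaving the cloud."""
        gb_per_month = uploads_per_day * 30 * avg_kb / 1_000_000  # KB -> GB
        return gb_per_month * rate

    cost = monthly_egress_cost(UPLOADS_PER_DAY, AVG_RESPONSE_KB, EGRESS_RATE_PER_GB)
    print(f"Estimated egress bill: ${cost:,.2f}/month")
    ```

    Scale any input up and the bill scales with it, which is exactly the gas-meter point.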

    And yes… with hypervisors you can make transitions between physical hardware easier. But the downtime is probably not related to hardware (that stuff is easy) but to truing up the databases, ensuring final database connections are working properly, etc.

    At this point the real issue is that LoTW is a product used worldwide. The reason for the downtime is likely related to Domain Name Service (DNS) records catching up with everyone.

    I can be flamed now.

    jim
     
    Last edited: Dec 20, 2020
    K3SX likes this.
  3. WD0BCT

    WD0BCT Premium Subscriber QRZ Page

    Have they found a better material than asbestos for flame-resistant clothing?
     
  4. KI4KEA

    KI4KEA Ham Member QRZ Page

    I don't know. :)

    That's not my area.

    jim
     
  5. KQ8W

    KQ8W XML Subscriber QRZ Page

    Azure ingress is free; only egress is charged. I guess AWS is similar.

    On-prem flash storage is really expensive. You can save a lot on these costs alone.

    Definitely not flaming. I enjoy the discussion!

    Use proper cloud DNS, such as Route53, and your TTLs can be 10 seconds. Moving to serverless technologies can cost you significantly less than running this yourself. You can use PaaS databases, such as AWS Aurora, and have the database compute capacity stop after a period of inactivity.
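    For what that looks like in practice, here is a minimal boto3 sketch that upserts an A record with a 10-second TTL. The zone ID, hostname, and IP address are hypothetical placeholders, not real LoTW values.

    ```python
    # Sketch: set a short-TTL A record in Route53 with boto3.
    # Zone ID, hostname, and IP below are hypothetical placeholders.
    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z0000000EXAMPLE",  # placeholder hosted zone ID
        ChangeBatch={
            "Comment": "Low TTL so a cutover propagates in seconds, not hours",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "lotw.example.org.",  # placeholder record name
                    "Type": "A",
                    "TTL": 10,  # seconds
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            }],
        },
    )
    ```

    With a TTL that short, the "DNS catching up with everyone" problem mentioned earlier in the thread mostly disappears.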

    Also, you can save upwards of 60% in cloud costs if you look into reservations. As you stated, once you go to the cloud, you have to watch the meters. If you don't want to go native, look into Azure VMware Solution (AVS). They will run the VMware services for you, and it's easy to shave 30-40% off your colocation costs.
     
  6. KD5PUR

    KD5PUR Ham Member QRZ Page

    Not going to affect me; I never got it to work.
     
    KC0PUN and KK1LL like this.
  7. N2RJ

    N2RJ Ham Member QRZ Page

    You're not the first person to mention this, and it is under consideration. Thanks for the suggestion. In one of my reports on IT modernization I suggested that, at the very least, the database should be in the cloud. With the database on a managed service, the scaling will all be done transparently.
     
  8. KK1LL

    KK1LL XML Subscriber QRZ Page

    You do realize that you used the plural form of server and database.
     
  9. KQ8W

    KQ8W XML Subscriber QRZ Page

    Beware of having the database in the cloud and the apps on-prem. You'll most likely use a VPN to connect the two networks, and I've found some providers' VPN solutions to be not as performant as expected. If you set up connectivity, I would highly suggest using iperf to test the available bandwidth for both single and multiple processes. For one provider, I found a single process could only get about 1/10 of the advertised bandwidth.
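    Something like this Python wrapper around iperf3 shows the single-stream vs. multi-stream comparison I mean. The server address is a placeholder, and it assumes iperf3 is installed locally with an iperf3 server ("iperf3 -s") already running on the far side of the tunnel.

    ```python
    # Sketch: compare single-stream vs. parallel-stream throughput across
    # a VPN using iperf3. SERVER is a hypothetical placeholder address;
    # an "iperf3 -s" must be listening there.
    import json
    import subprocess

    SERVER = "10.0.0.1"  # placeholder: host on the far side of the VPN

    def run_iperf(streams: int, seconds: int = 10) -> float:
        """Run iperf3 with the given number of parallel streams and
        return the received throughput in Mbit/s."""
        out = subprocess.run(
            ["iperf3", "-c", SERVER, "-P", str(streams), "-t", str(seconds), "-J"],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)["end"]["sum_received"]["bits_per_second"] / 1e6

    print(f"1 stream:   {run_iperf(1):8.1f} Mbit/s")
    print(f"10 streams: {run_iperf(10):8.1f} Mbit/s")
    ```

    If the single-stream number comes out at a fraction of the ten-stream number, you have found the kind of per-flow bottleneck I ran into.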
     
  10. K2CD

    K2CD Premium Subscriber QRZ Page

    I perceive a noticeable difference for the better since it came back online. Much snappier page loads when pulling records. Good work!
     
