Join us to learn the most effective ways to collect from insured patients and streamline patient collection while decreasing self-pay receivables
Join Dan Ward to learn how predictive analytics can uncover millions of dollars in opportunity hidden in hospital and health system data
Join Paul Bradley, PhD, to learn how predictive analytics in healthcare can help uncover millions in recoverable cash for health systems
Join ZirMed’s Ryan Feldt to see how cloud-based software combining data science and predictive analytics is helping hospitals avoid denials
Amid the many major changes under way in healthcare today, one theme persists across conversations in healthcare boardrooms and executive offices:
“We know we’re leaving money on the table.”
To say there are dozens of instances when hospitals are uncompensated or underpaid for the care they provide would be a laughable understatement. There are thousands.
These opportunities are scattered across millions of claims and tens of millions of potentially relevant patient-, procedure-, and provider-level data points. The only way to uncover them is through data-mining and predictive analytics – and it is increasingly vital that they be uncovered.
For hospitals, these hidden opportunities translate to millions of dollars a year – in the realm of charge-capture alone, hospitals on average leave 1–2 percent of net revenue on the table. Yet even the traditional processes for preventing and capturing more of this net revenue can drive up costs – and are almost impossible to optimize without machine-learning and predictive modeling.
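To put that 1–2 percent in concrete terms, here is a back-of-the-envelope calculation (the net-revenue figure below is a hypothetical round number, not drawn from any hospital cited here):

```python
# Hypothetical illustration of charge-capture leakage at 1-2% of net revenue.
net_revenue = 500_000_000  # hypothetical annual net revenue: $500M

low_leakage = net_revenue * 0.01   # 1% of net revenue missed
high_leakage = net_revenue * 0.02  # 2% of net revenue missed

print(f"Estimated missed charges: ${low_leakage:,.0f} to ${high_leakage:,.0f} per year")
```

Even at the low end of the range, the missed revenue is measured in millions of dollars per year.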
You already know the importance of leveraging health IT to help improve the care of patients with one or more chronic conditions. But population health management doesn’t stop there. You’ll need to understand which clinical best practices keep patients healthy, and which operational, financial, and IT best practices keep them engaged and following their care regimens.
Read how Yuma Regional Medical Center increased annual net revenue by $10 million using ZirMed’s charge integrity solution.
Join national consultant and popular speaker Deborah Walker Keegan, PhD, as she shows how today’s most successful medical practices employ sophisticated data and business analytics, tools, and techniques to create financial value, optimize revenue, and analyze performance gaps. Attend this session to learn how to:
Take a deep dive into your accounts receivables
Make sure you’re not deceived by your billing metrics
Use key benchmarks to conduct a gap analysis
Stratify your revenue to meet the challenge of reimbursement reform
The reasons claims are denied are so varied that managing denials can feel like chasing a thousand different tails. This situation is not surprising given that a hypothetical denial rate of just 5 percent translates to tens of thousands of denied claims per year for large hospitals—where real‐world denial rates often range from 12 to […]
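The arithmetic behind that claim is straightforward; the claim volume below is a hypothetical round number for a large hospital system:

```python
# Hypothetical illustration: even a modest denial rate yields a huge denial workload.
annual_claims = 1_000_000  # hypothetical annual claim volume for a large hospital system
denial_rate = 0.05         # the hypothetical 5% rate cited above

denied_claims = int(annual_claims * denial_rate)
print(f"{denied_claims:,} denied claims per year")
```

At real-world denial rates above 10 percent, the workload scales proportionally.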
Healthcare providers are often challenged to allocate the resources and technologies needed to research and analyze their coding effectiveness and efficiency. Given time and resource limitations, manual audits are often limited. With predictive modeling, hospitals can automate the analysis of patient accounts to ensure the necessary codes and charges have been submitted. Predictive models can […]
Industry insiders shine a light on the current hospital revenue cycle management trends that have made an impact already over the past year and those that will play a role in the years to come. To view this white paper, please complete and submit the form below:
A fast parsimonious linear-programming-based algorithm for training neural networks is proposed that suppresses redundant features while using a minimal number of hidden units. This is achieved by propagating sideways to newly added hidden units the task of separating successive groups of unclassified points. Computational results show an improvement of 26.53% and 19.76% in tenfold cross-validation […]
Practical statistical clustering algorithms typically center upon an iterative refinement optimization procedure to compute a locally optimal clustering solution that maximizes the fit to data. These algorithms typically require many database scans to converge, and within each scan they require access to every record in the data table.
We consider practical methods for adding constraints to the K-Means clustering algorithm in order to avoid local solutions with empty clusters or clusters having very few points.
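The paper formulates the constrained assignment step as an optimization problem. As a rough pure-Python sketch of the same goal (a greedy repair heuristic, not the paper's formulation), one can enforce a minimum cluster size like this:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Coordinate-wise mean of a non-empty list of tuples."""
    return tuple(sum(p[i] for p in pts) / len(pts) for i in range(len(pts[0])))

def kmeans_min_size(points, k, min_size, iters=20, seed=0):
    """K-Means variant that repairs clusters smaller than min_size by stealing
    the closest points from the currently largest cluster.
    Assumes min_size >= 1 and k * min_size <= len(points).
    (A greedy approximation of the constrained assignment step, not the
    paper's exact formulation.)"""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = []
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: dist2(p, centers[c]))
            clusters[j].append(p)
        # Repair step: greedily enforce the minimum cluster size.
        for j in range(k):
            while len(clusters[j]) < min_size:
                donor = max((c for c in range(k) if c != j),
                            key=lambda c: len(clusters[c]))
                p = min(clusters[donor], key=lambda q: dist2(q, centers[j]))
                clusters[donor].remove(p)
                clusters[j].append(p)
        # Update step: recompute centers as cluster means.
        centers = [mean(cl) for cl in clusters]
    return centers, clusters
```

The repair step guarantees every returned cluster meets the size floor, which is exactly the degenerate-solution problem the paper addresses.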
The problem of assigning m points in the n-dimensional real space Rn to k clusters is formulated as that of determining k centers in Rn such that the sum of distances of each point to the nearest center is minimized. If a polyhedral distance is used, the problem can be formulated as that of minimizing […]
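When the polyhedral distance is the 1-norm, the center-update step of the resulting algorithm reduces to a coordinate-wise median (the 1-norm minimizer). A minimal sketch of one assign-and-update iteration under that assumption:

```python
def median(xs):
    """Median of a non-empty list of numbers."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def kmedian_step(points, centers):
    """One iteration of k-Median under the 1-norm: assign each point to its
    nearest center by 1-norm distance, then move each center to the
    coordinate-wise median of its cluster (the 1-norm minimizer).
    A sketch of the special case only, not the general polyhedral-distance
    formulation in the paper."""
    k = len(centers)
    clusters = [[] for _ in range(k)]
    for p in points:
        j = min(range(k),
                key=lambda c: sum(abs(x - y) for x, y in zip(p, centers[c])))
        clusters[j].append(p)
    new_centers = [
        tuple(median([p[i] for p in cl]) for i in range(len(cl[0]))) if cl else centers[j]
        for j, cl in enumerate(clusters)
    ]
    return new_centers, clusters
```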
The problem of discriminating between two finite point sets in n-dimensional feature space by a separating plane that utilizes as few of the features as possible, is formulated as a mathematical program with a parametric objective function and linear constraints. The step function that appears in the objective function can be approximated by a sigmoid […]
A theoretically justifiable fast finite successive linear approximation algorithm is proposed for obtaining a parsimonious solution to a corrupted linear system Ax = b + p, where the corruption p is due to noise or error in measurement.
Computational comparison is made between two feature selection approaches for finding a separating plane that discriminates between two point sets in an n-dimensional feature space that utilizes as few of the n features (dimensions) as possible. In the concave minimization approach [19, 5] a separating plane is generated by minimizing a weighted sum of distances […]
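The papers solve the sparse-separation problem with concave-minimization and LP formulations; the tiny exhaustive search below is only meant to make the objective concrete on small data (it tries axis-aligned feature subsets of increasing size, with an all-ones weight vector over the chosen features):

```python
from itertools import combinations

def sparse_separating_plane(A, B, max_features=None):
    """Brute-force illustration of the feature-selection goal described
    above: find a separating plane that uses as few features as possible.
    Subsets are tried in increasing size; the candidate plane sums the
    selected features and checks for a separating threshold.
    (Illustrative only; not the papers' concave-minimization or LP method.)"""
    n = len(A[0])
    for size in range(1, (max_features or n) + 1):
        for feats in combinations(range(n), size):
            proj_a = [sum(p[i] for i in feats) for p in A]
            proj_b = [sum(p[i] for i in feats) for p in B]
            if max(proj_a) < min(proj_b) or max(proj_b) < min(proj_a):
                return feats  # smallest feature subset that separates A from B
    return None
```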
Practical approaches to clustering use an iterative procedure (e.g. K-Means, EM) which converges to one of numerous local minima. It is known that these iterative techniques are especially sensitive to initial starting conditions. We present a procedure for computing a refined starting condition from a given initial one that is based on an efficient technique […]
Practical clustering algorithms require multiple data scans to achieve convergence. For large databases, these scans become prohibitively expensive. We present a scalable clustering framework applicable to a wide class of iterative clustering. We require at most one scan of the database. In this work, the framework is instantiated and numerically justified with the popular K-Means […]
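The key observation is that once a record has been assigned, only its sufficient statistics (count and coordinate-wise sum) need to be retained. A minimal sketch of a single-scan K-Means center update built on that idea (not the paper's full compression framework):

```python
def one_pass_update(stream, centers):
    """Single scan over a data stream: each record is assigned to its nearest
    center, folded into per-cluster sufficient statistics (count and
    coordinate-wise sum), and then discarded. New centers are computed from
    the summaries alone, with no second scan of the data."""
    k, d = len(centers), len(centers[0])
    counts = [0] * k
    sums = [[0.0] * d for _ in range(k)]
    for p in stream:  # the only pass over the data
        j = min(range(k),
                key=lambda c: sum((x - y) ** 2 for x, y in zip(p, centers[c])))
        counts[j] += 1
        for i in range(d):
            sums[j][i] += p[i]
    return [
        tuple(sums[j][i] / counts[j] for i in range(d)) if counts[j] else centers[j]
        for j in range(k)
    ]
```

Because only O(k * d) numbers are kept, the update works even when the stream itself is far too large for memory.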
Iterative refinement clustering algorithms (e.g. K-Means, EM) converge to one of numerous local minima. It is known that they are especially sensitive to initial conditions. We present a procedure for computing a refined starting condition from a given initial one that is based on an efficient technique for estimating the modes of a distribution. The […]
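A minimal sketch of the subsampling idea (cluster several small subsamples, pool the resulting centers, re-cluster the pooled set from each candidate solution, and keep the best candidate as the refined start). The helper `lloyd` and all parameter values below are illustrative, not the paper's exact procedure:

```python
import random

def lloyd(points, centers, iters=10):
    """Plain K-Means (Lloyd's algorithm) from the given starting centers."""
    k = len(centers)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((x - y) ** 2 for x, y in zip(p, centers[c])))
            clusters[j].append(p)
        centers = [
            tuple(sum(p[i] for p in cl) / len(cl) for i in range(len(points[0])))
            if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers

def distortion(points, centers):
    """Sum of squared distances from each point to its nearest center."""
    return sum(min(sum((x - y) ** 2 for x, y in zip(p, c)) for c in centers)
               for p in points)

def refined_start(points, k, num_subsamples=4, sample_frac=0.5, seed=0):
    """Compute a refined K-Means starting condition by clustering several
    small subsamples and re-clustering the pooled candidate centers."""
    rng = random.Random(seed)
    m = max(k, int(len(points) * sample_frac))
    candidates = []
    for _ in range(num_subsamples):
        sub = rng.sample(points, m)
        candidates.append(lloyd(sub, rng.sample(sub, k)))
    pooled = [c for cand in candidates for c in cand]
    refined = [lloyd(pooled, list(cand)) for cand in candidates]
    return min(refined, key=lambda cs: distortion(pooled, cs))
```

The refined centers are then handed to a full K-Means run in place of a raw random initialization.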
Efficiently answering decision support queries is an important problem. Most of the work in this direction has been in the context of the data cube. Queries are efficiently answered by pre-computing large parts of the cube. Besides having large space requirements, such pre-computation requires that the hierarchy along each dimension be fixed (hence dimensions are […]
A linear support vector machine formulation is used to generate a fast, finitely terminating linear-programming algorithm for discriminating between two massive sets in n-dimensional space, where the number of points can be orders of magnitude larger than n. The algorithm creates a succession of sufficiently small linear programs that separate chunks of the data at a time.
A finite new algorithm is proposed for clustering m given points in n-dimensional real space into k clusters by generating k planes that constitute a local solution to the nonconvex problem of minimizing the sum of squares of the 2-norm distances between each point and a nearest plane.
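The alternating scheme can be sketched with numpy: assign each point to its nearest plane, then refit each plane to its cluster. For a mean-centered cluster, the least-squares plane normal is the eigenvector of the smallest eigenvalue of the cluster's scatter matrix. A minimal sketch under those assumptions (not the paper's exact algorithm):

```python
import numpy as np

def kplanes(X, k, iters=20, seed=0):
    """k-Plane clustering sketch: alternate between (1) assigning each point
    to its nearest plane w'x = gamma and (2) refitting each plane, whose
    least-squares normal w is the eigenvector of the smallest eigenvalue of
    the cluster's scatter matrix. Random initial labels; illustrative only."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(X))
    planes = []
    for _ in range(iters):
        planes = []
        for j in range(k):
            C = X[labels == j]
            if len(C) == 0:
                C = X  # degenerate cluster: refit on all points
            mu = C.mean(axis=0)
            scatter = (C - mu).T @ (C - mu)
            vals, vecs = np.linalg.eigh(scatter)
            w = vecs[:, 0]              # eigenvector of the smallest eigenvalue
            planes.append((w, w @ mu))  # plane: w'x = gamma
        # Reassign each point to the plane with the smallest residual |w'x - gamma|.
        d = np.stack([np.abs(X @ w - g) for w, g in planes], axis=1)
        labels = d.argmin(axis=1)
    return planes, labels
```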
We present a generalization of frequent itemsets allowing for the notion of errors in the itemset definition. We motivate the problem and present an efficient algorithm that identifies error tolerant frequent clusters of items in transactional data (customer purchase data, web browsing data, text, etc.). The algorithm exploits sparseness of the underlying data to find […]
Probabilistic mixture models are used for a broad range of data analysis tasks such as clustering, classification, and predictive modeling. Due to their inherent probabilistic nature, mixture models can easily be combined with other probabilistic or non-probabilistic techniques, thus forming more complex data analysis systems.
We describe the role of generalized support vector machines in separating massive and complex data using arbitrary nonlinear kernels.
An automated data mining service offers an outsourced, cost-effective analysis option for clients desiring to leverage their data resources for decision support and operational improvement. In the context of the service model, typically the client provides the service with data and other information likely to aid in the analysis process (e.g. domain knowledge, […]
Data mining has become increasingly important as a key to analyzing, digesting and understanding the flood of digital data. Achieving this goal requires scaling mining algorithms to large databases.
Identify unique trends, find key information, and extract actionable insights to improve your revenue cycle and support population health management—see exactly which clinical and business opportunities will have the greatest impact on your success with ZirMed’s predictive analytics solutions.